00:00:00.001 Started by upstream project "autotest-per-patch" build number 132777 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.012 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.013 The recommended git tool is: git 00:00:00.014 using credential 00000000-0000-0000-0000-000000000002 00:00:00.016 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.032 Fetching changes from the remote Git repository 00:00:00.036 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.052 Using shallow fetch with depth 1 00:00:00.052 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.052 > git --version # timeout=10 00:00:00.082 > git --version # 'git version 2.39.2' 00:00:00.082 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.117 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.117 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.280 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.292 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.306 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:02.306 > git config core.sparsecheckout # timeout=10 00:00:02.317 > git read-tree -mu HEAD # timeout=10 00:00:02.342 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:02.366 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:02.366 > git rev-list 
--no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:02.780 [Pipeline] Start of Pipeline 00:00:02.797 [Pipeline] library 00:00:02.799 Loading library shm_lib@master 00:00:02.799 Library shm_lib@master is cached. Copying from home. 00:00:02.814 [Pipeline] node 00:27:47.920 Running on VM-host-SM0 in /var/jenkins/workspace/raid-vg-autotest_2 00:27:47.922 [Pipeline] { 00:27:47.934 [Pipeline] catchError 00:27:47.936 [Pipeline] { 00:27:47.953 [Pipeline] wrap 00:27:47.963 [Pipeline] { 00:27:47.972 [Pipeline] stage 00:27:47.974 [Pipeline] { (Prologue) 00:27:47.992 [Pipeline] echo 00:27:47.994 Node: VM-host-SM0 00:27:47.998 [Pipeline] cleanWs 00:27:48.005 [WS-CLEANUP] Deleting project workspace... 00:27:48.005 [WS-CLEANUP] Deferred wipeout is used... 00:27:48.010 [WS-CLEANUP] done 00:27:48.237 [Pipeline] setCustomBuildProperty 00:27:48.358 [Pipeline] httpRequest 00:27:48.759 [Pipeline] echo 00:27:48.760 Sorcerer 10.211.164.101 is alive 00:27:48.767 [Pipeline] retry 00:27:48.769 [Pipeline] { 00:27:48.780 [Pipeline] httpRequest 00:27:48.784 HttpMethod: GET 00:27:48.785 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:27:48.785 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:27:48.786 Response Code: HTTP/1.1 200 OK 00:27:48.786 Success: Status code 200 is in the accepted range: 200,404 00:27:48.787 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:27:48.931 [Pipeline] } 00:27:48.946 [Pipeline] // retry 00:27:48.952 [Pipeline] sh 00:27:49.228 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:27:49.241 [Pipeline] httpRequest 00:27:49.641 [Pipeline] echo 00:27:49.643 Sorcerer 10.211.164.101 is alive 00:27:49.651 [Pipeline] retry 00:27:49.653 [Pipeline] { 00:27:49.667 [Pipeline] httpRequest 00:27:49.672 HttpMethod: GET 00:27:49.672 URL: 
http://10.211.164.101/packages/spdk_afe42438afd8d09d1fba88e960ce92b846fc8579.tar.gz 00:27:49.673 Sending request to url: http://10.211.164.101/packages/spdk_afe42438afd8d09d1fba88e960ce92b846fc8579.tar.gz 00:27:49.673 Response Code: HTTP/1.1 200 OK 00:27:49.674 Success: Status code 200 is in the accepted range: 200,404 00:27:49.674 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/spdk_afe42438afd8d09d1fba88e960ce92b846fc8579.tar.gz 00:27:51.946 [Pipeline] } 00:27:51.966 [Pipeline] // retry 00:27:51.975 [Pipeline] sh 00:27:52.260 + tar --no-same-owner -xf spdk_afe42438afd8d09d1fba88e960ce92b846fc8579.tar.gz 00:27:55.644 [Pipeline] sh 00:27:55.924 + git -C spdk log --oneline -n5 00:27:55.924 afe42438a env: use 4-KiB memory mapping in no-huge mode 00:27:55.924 cabd61f7f env: extend the page table to support 4-KiB mapping 00:27:55.924 66902d69a env: explicitly set --legacy-mem flag in no hugepages mode 00:27:55.924 421ce3854 env: add mem_map_fini and vtophys_fini to cleanup mem maps 00:27:55.924 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:27:55.941 [Pipeline] writeFile 00:27:55.956 [Pipeline] sh 00:27:56.237 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:27:56.250 [Pipeline] sh 00:27:56.531 + cat autorun-spdk.conf 00:27:56.531 SPDK_RUN_FUNCTIONAL_TEST=1 00:27:56.531 SPDK_RUN_ASAN=1 00:27:56.531 SPDK_RUN_UBSAN=1 00:27:56.531 SPDK_TEST_RAID=1 00:27:56.531 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:27:56.538 RUN_NIGHTLY=0 00:27:56.558 [Pipeline] } 00:27:56.573 [Pipeline] // stage 00:27:56.587 [Pipeline] stage 00:27:56.589 [Pipeline] { (Run VM) 00:27:56.601 [Pipeline] sh 00:27:56.879 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:27:56.879 + echo 'Start stage prepare_nvme.sh' 00:27:56.879 Start stage prepare_nvme.sh 00:27:56.879 + [[ -n 2 ]] 00:27:56.879 + disk_prefix=ex2 00:27:56.879 + [[ -n /var/jenkins/workspace/raid-vg-autotest_2 ]] 00:27:56.879 + [[ -e 
/var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf ]] 00:27:56.879 + source /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf 00:27:56.879 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:27:56.879 ++ SPDK_RUN_ASAN=1 00:27:56.879 ++ SPDK_RUN_UBSAN=1 00:27:56.879 ++ SPDK_TEST_RAID=1 00:27:56.879 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:27:56.879 ++ RUN_NIGHTLY=0 00:27:56.879 + cd /var/jenkins/workspace/raid-vg-autotest_2 00:27:56.879 + nvme_files=() 00:27:56.879 + declare -A nvme_files 00:27:56.879 + backend_dir=/var/lib/libvirt/images/backends 00:27:56.879 + nvme_files['nvme.img']=5G 00:27:56.879 + nvme_files['nvme-cmb.img']=5G 00:27:56.879 + nvme_files['nvme-multi0.img']=4G 00:27:56.879 + nvme_files['nvme-multi1.img']=4G 00:27:56.879 + nvme_files['nvme-multi2.img']=4G 00:27:56.879 + nvme_files['nvme-openstack.img']=8G 00:27:56.879 + nvme_files['nvme-zns.img']=5G 00:27:56.879 + (( SPDK_TEST_NVME_PMR == 1 )) 00:27:56.879 + (( SPDK_TEST_FTL == 1 )) 00:27:56.880 + (( SPDK_TEST_NVME_FDP == 1 )) 00:27:56.880 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:27:56.880 + for nvme in "${!nvme_files[@]}" 00:27:56.880 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:27:56.880 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:27:56.880 + for nvme in "${!nvme_files[@]}" 00:27:56.880 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:27:56.880 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:27:56.880 + for nvme in "${!nvme_files[@]}" 00:27:56.880 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:27:56.880 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:27:56.880 + for nvme in "${!nvme_files[@]}" 00:27:56.880 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:27:56.880 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:27:56.880 + for nvme in "${!nvme_files[@]}" 00:27:56.880 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:27:56.880 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:27:56.880 + for nvme in "${!nvme_files[@]}" 00:27:56.880 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:27:56.880 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:27:56.880 + for nvme in "${!nvme_files[@]}" 00:27:56.880 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:27:57.137 
Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:27:57.137 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:27:57.137 + echo 'End stage prepare_nvme.sh' 00:27:57.137 End stage prepare_nvme.sh 00:27:57.148 [Pipeline] sh 00:27:57.427 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:27:57.427 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39 00:27:57.427 00:27:57.427 DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant 00:27:57.427 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk 00:27:57.427 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_2 00:27:57.427 HELP=0 00:27:57.427 DRY_RUN=0 00:27:57.427 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img, 00:27:57.427 NVME_DISKS_TYPE=nvme,nvme, 00:27:57.427 NVME_AUTO_CREATE=0 00:27:57.427 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img, 00:27:57.427 NVME_CMB=,, 00:27:57.427 NVME_PMR=,, 00:27:57.427 NVME_ZNS=,, 00:27:57.427 NVME_MS=,, 00:27:57.427 NVME_FDP=,, 00:27:57.427 SPDK_VAGRANT_DISTRO=fedora39 00:27:57.427 SPDK_VAGRANT_VMCPU=10 00:27:57.427 SPDK_VAGRANT_VMRAM=12288 00:27:57.427 SPDK_VAGRANT_PROVIDER=libvirt 00:27:57.427 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:27:57.427 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:27:57.427 SPDK_OPENSTACK_NETWORK=0 00:27:57.427 VAGRANT_PACKAGE_BOX=0 00:27:57.427 
VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:27:57.427 FORCE_DISTRO=true 00:27:57.427 VAGRANT_BOX_VERSION= 00:27:57.427 EXTRA_VAGRANTFILES= 00:27:57.427 NIC_MODEL=e1000 00:27:57.427 00:27:57.427 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt' 00:27:57.427 /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_2 00:28:00.730 Bringing machine 'default' up with 'libvirt' provider... 00:28:01.295 ==> default: Creating image (snapshot of base box volume). 00:28:01.551 ==> default: Creating domain with the following settings... 00:28:01.552 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733721527_e9538cbe62878904c123 00:28:01.552 ==> default: -- Domain type: kvm 00:28:01.552 ==> default: -- Cpus: 10 00:28:01.552 ==> default: -- Feature: acpi 00:28:01.552 ==> default: -- Feature: apic 00:28:01.552 ==> default: -- Feature: pae 00:28:01.552 ==> default: -- Memory: 12288M 00:28:01.552 ==> default: -- Memory Backing: hugepages: 00:28:01.552 ==> default: -- Management MAC: 00:28:01.552 ==> default: -- Loader: 00:28:01.552 ==> default: -- Nvram: 00:28:01.552 ==> default: -- Base box: spdk/fedora39 00:28:01.552 ==> default: -- Storage pool: default 00:28:01.552 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733721527_e9538cbe62878904c123.img (20G) 00:28:01.552 ==> default: -- Volume Cache: default 00:28:01.552 ==> default: -- Kernel: 00:28:01.552 ==> default: -- Initrd: 00:28:01.552 ==> default: -- Graphics Type: vnc 00:28:01.552 ==> default: -- Graphics Port: -1 00:28:01.552 ==> default: -- Graphics IP: 127.0.0.1 00:28:01.552 ==> default: -- Graphics Password: Not defined 00:28:01.552 ==> default: -- Video Type: cirrus 00:28:01.552 ==> default: -- Video VRAM: 9216 00:28:01.552 ==> default: -- Sound Type: 00:28:01.552 ==> default: -- Keymap: en-us 00:28:01.552 ==> default: -- TPM Path: 00:28:01.552 
==> default: -- INPUT: type=mouse, bus=ps2 00:28:01.552 ==> default: -- Command line args: 00:28:01.552 ==> default: -> value=-device, 00:28:01.552 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:28:01.552 ==> default: -> value=-drive, 00:28:01.552 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:28:01.552 ==> default: -> value=-device, 00:28:01.552 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:28:01.552 ==> default: -> value=-device, 00:28:01.552 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:28:01.552 ==> default: -> value=-drive, 00:28:01.552 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:28:01.552 ==> default: -> value=-device, 00:28:01.552 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:28:01.552 ==> default: -> value=-drive, 00:28:01.552 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:28:01.552 ==> default: -> value=-device, 00:28:01.552 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:28:01.552 ==> default: -> value=-drive, 00:28:01.552 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:28:01.552 ==> default: -> value=-device, 00:28:01.552 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:28:01.809 ==> default: Creating shared folders metadata... 00:28:01.809 ==> default: Starting domain. 00:28:03.706 ==> default: Waiting for domain to get an IP address... 00:28:18.582 ==> default: Waiting for SSH to become available... 
00:28:19.568 ==> default: Configuring and enabling network interfaces... 00:28:24.847 default: SSH address: 192.168.121.148:22 00:28:24.847 default: SSH username: vagrant 00:28:24.847 default: SSH auth method: private key 00:28:26.222 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:28:34.324 ==> default: Mounting SSHFS shared folder... 00:28:36.223 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:28:36.223 ==> default: Checking Mount.. 00:28:37.633 ==> default: Folder Successfully Mounted! 00:28:37.633 ==> default: Running provisioner: file... 00:28:38.565 default: ~/.gitconfig => .gitconfig 00:28:38.823 00:28:38.823 SUCCESS! 00:28:38.823 00:28:38.823 cd to /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:28:38.823 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:28:38.823 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 
00:28:38.823 00:28:38.831 [Pipeline] } 00:28:38.845 [Pipeline] // stage 00:28:38.853 [Pipeline] dir 00:28:38.854 Running in /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt 00:28:38.855 [Pipeline] { 00:28:38.867 [Pipeline] catchError 00:28:38.868 [Pipeline] { 00:28:38.880 [Pipeline] sh 00:28:39.157 + vagrant ssh-config --host vagrant 00:28:39.158 + sed -ne /^Host/,$p 00:28:39.158 + tee ssh_conf 00:28:42.468 Host vagrant 00:28:42.468 HostName 192.168.121.148 00:28:42.468 User vagrant 00:28:42.468 Port 22 00:28:42.468 UserKnownHostsFile /dev/null 00:28:42.468 StrictHostKeyChecking no 00:28:42.468 PasswordAuthentication no 00:28:42.468 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:28:42.468 IdentitiesOnly yes 00:28:42.468 LogLevel FATAL 00:28:42.468 ForwardAgent yes 00:28:42.468 ForwardX11 yes 00:28:42.468 00:28:42.480 [Pipeline] withEnv 00:28:42.482 [Pipeline] { 00:28:42.495 [Pipeline] sh 00:28:42.772 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:28:42.772 source /etc/os-release 00:28:42.772 [[ -e /image.version ]] && img=$(< /image.version) 00:28:42.772 # Minimal, systemd-like check. 00:28:42.772 if [[ -e /.dockerenv ]]; then 00:28:42.772 # Clear garbage from the node's name: 00:28:42.772 # agt-er_autotest_547-896 -> autotest_547-896 00:28:42.772 # $HOSTNAME is the actual container id 00:28:42.772 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:28:42.772 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:28:42.772 # We can assume this is a mount from a host where container is running, 00:28:42.772 # so fetch its hostname to easily identify the target swarm worker. 
00:28:42.772 container="$(< /etc/hostname) ($agent)" 00:28:42.772 else 00:28:42.772 # Fallback 00:28:42.772 container=$agent 00:28:42.772 fi 00:28:42.772 fi 00:28:42.772 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:28:42.772 00:28:43.041 [Pipeline] } 00:28:43.056 [Pipeline] // withEnv 00:28:43.064 [Pipeline] setCustomBuildProperty 00:28:43.077 [Pipeline] stage 00:28:43.079 [Pipeline] { (Tests) 00:28:43.094 [Pipeline] sh 00:28:43.372 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:28:43.644 [Pipeline] sh 00:28:43.922 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:28:44.197 [Pipeline] timeout 00:28:44.197 Timeout set to expire in 1 hr 30 min 00:28:44.199 [Pipeline] { 00:28:44.215 [Pipeline] sh 00:28:44.527 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:28:45.093 HEAD is now at afe42438a env: use 4-KiB memory mapping in no-huge mode 00:28:45.104 [Pipeline] sh 00:28:45.382 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:28:45.652 [Pipeline] sh 00:28:45.929 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:28:46.200 [Pipeline] sh 00:28:46.478 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:28:46.736 ++ readlink -f spdk_repo 00:28:46.736 + DIR_ROOT=/home/vagrant/spdk_repo 00:28:46.736 + [[ -n /home/vagrant/spdk_repo ]] 00:28:46.736 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:28:46.736 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:28:46.736 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:28:46.736 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:28:46.736 + [[ -d /home/vagrant/spdk_repo/output ]] 00:28:46.736 + [[ raid-vg-autotest == pkgdep-* ]] 00:28:46.736 + cd /home/vagrant/spdk_repo 00:28:46.736 + source /etc/os-release 00:28:46.736 ++ NAME='Fedora Linux' 00:28:46.736 ++ VERSION='39 (Cloud Edition)' 00:28:46.736 ++ ID=fedora 00:28:46.736 ++ VERSION_ID=39 00:28:46.736 ++ VERSION_CODENAME= 00:28:46.736 ++ PLATFORM_ID=platform:f39 00:28:46.736 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:28:46.736 ++ ANSI_COLOR='0;38;2;60;110;180' 00:28:46.736 ++ LOGO=fedora-logo-icon 00:28:46.736 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:28:46.736 ++ HOME_URL=https://fedoraproject.org/ 00:28:46.736 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:28:46.736 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:28:46.736 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:28:46.736 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:28:46.736 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:28:46.736 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:28:46.736 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:28:46.736 ++ SUPPORT_END=2024-11-12 00:28:46.736 ++ VARIANT='Cloud Edition' 00:28:46.736 ++ VARIANT_ID=cloud 00:28:46.736 + uname -a 00:28:46.736 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:28:46.736 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:28:47.303 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:47.303 Hugepages 00:28:47.303 node hugesize free / total 00:28:47.303 node0 1048576kB 0 / 0 00:28:47.303 node0 2048kB 0 / 0 00:28:47.303 00:28:47.304 Type BDF Vendor Device NUMA Driver Device Block devices 00:28:47.304 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:28:47.304 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:28:47.304 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:28:47.304 + rm -f /tmp/spdk-ld-path 00:28:47.304 + source autorun-spdk.conf 00:28:47.304 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:28:47.304 ++ SPDK_RUN_ASAN=1 00:28:47.304 ++ SPDK_RUN_UBSAN=1 00:28:47.304 ++ SPDK_TEST_RAID=1 00:28:47.304 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:28:47.304 ++ RUN_NIGHTLY=0 00:28:47.304 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:28:47.304 + [[ -n '' ]] 00:28:47.304 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:28:47.304 + for M in /var/spdk/build-*-manifest.txt 00:28:47.304 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:28:47.304 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:28:47.304 + for M in /var/spdk/build-*-manifest.txt 00:28:47.304 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:28:47.304 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:28:47.304 + for M in /var/spdk/build-*-manifest.txt 00:28:47.304 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:28:47.304 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:28:47.304 ++ uname 00:28:47.304 + [[ Linux == \L\i\n\u\x ]] 00:28:47.304 + sudo dmesg -T 00:28:47.304 + sudo dmesg --clear 00:28:47.304 + dmesg_pid=5259 00:28:47.304 + [[ Fedora Linux == FreeBSD ]] 00:28:47.304 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:28:47.304 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:28:47.304 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:28:47.304 + [[ -x /usr/src/fio-static/fio ]] 00:28:47.304 + sudo dmesg -Tw 00:28:47.304 + export FIO_BIN=/usr/src/fio-static/fio 00:28:47.304 + FIO_BIN=/usr/src/fio-static/fio 00:28:47.304 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:28:47.304 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:28:47.304 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:28:47.304 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:28:47.304 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:28:47.304 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:28:47.304 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:28:47.304 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:28:47.304 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:28:47.562 05:19:34 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:28:47.562 05:19:34 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:28:47.562 05:19:34 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:28:47.562 05:19:34 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:28:47.562 05:19:34 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:28:47.562 05:19:34 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:28:47.562 05:19:34 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:28:47.562 05:19:34 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:28:47.562 05:19:34 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:28:47.562 05:19:34 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:28:47.562 05:19:34 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:28:47.562 05:19:34 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:47.562 05:19:34 -- scripts/common.sh@15 -- $ shopt -s extglob 00:28:47.562 05:19:34 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:47.562 05:19:34 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:47.562 05:19:34 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:47.562 05:19:34 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.562 05:19:34 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.562 05:19:34 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.562 05:19:34 -- paths/export.sh@5 -- $ export PATH 00:28:47.562 05:19:34 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.562 05:19:34 -- 
common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:28:47.562 05:19:34 -- common/autobuild_common.sh@493 -- $ date +%s 00:28:47.562 05:19:34 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733721574.XXXXXX 00:28:47.562 05:19:34 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733721574.e55xXd 00:28:47.562 05:19:34 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:28:47.562 05:19:34 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:28:47.562 05:19:34 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:28:47.562 05:19:34 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:28:47.562 05:19:34 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:28:47.562 05:19:34 -- common/autobuild_common.sh@509 -- $ get_config_params 00:28:47.562 05:19:34 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:28:47.562 05:19:34 -- common/autotest_common.sh@10 -- $ set +x 00:28:47.562 05:19:34 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:28:47.562 05:19:34 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:28:47.562 05:19:34 -- pm/common@17 -- $ local monitor 00:28:47.562 05:19:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:47.563 05:19:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:47.563 05:19:34 -- pm/common@21 -- $ date +%s 00:28:47.563 05:19:34 -- pm/common@25 -- $ sleep 1 00:28:47.563 05:19:34 -- pm/common@21 -- $ date +%s 00:28:47.563 
05:19:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733721574 00:28:47.563 05:19:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733721574 00:28:47.563 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733721574_collect-cpu-load.pm.log 00:28:47.563 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733721574_collect-vmstat.pm.log 00:28:48.505 05:19:35 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:28:48.505 05:19:35 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:28:48.505 05:19:35 -- spdk/autobuild.sh@12 -- $ umask 022 00:28:48.505 05:19:35 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:28:48.505 05:19:35 -- spdk/autobuild.sh@16 -- $ date -u 00:28:48.505 Mon Dec 9 05:19:35 AM UTC 2024 00:28:48.505 05:19:35 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:28:48.505 v25.01-pre-280-gafe42438a 00:28:48.505 05:19:35 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:28:48.505 05:19:35 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:28:48.505 05:19:35 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:28:48.505 05:19:35 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:28:48.505 05:19:35 -- common/autotest_common.sh@10 -- $ set +x 00:28:48.505 ************************************ 00:28:48.505 START TEST asan 00:28:48.505 ************************************ 00:28:48.505 using asan 00:28:48.505 05:19:35 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:28:48.505 00:28:48.505 real 0m0.000s 00:28:48.505 user 0m0.000s 00:28:48.505 sys 0m0.000s 00:28:48.505 05:19:35 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:28:48.505 05:19:35 asan -- common/autotest_common.sh@10 -- $ set +x 
00:28:48.505 ************************************ 00:28:48.505 END TEST asan 00:28:48.505 ************************************ 00:28:48.505 05:19:35 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:28:48.505 05:19:35 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:28:48.505 05:19:35 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:28:48.505 05:19:35 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:28:48.505 05:19:35 -- common/autotest_common.sh@10 -- $ set +x 00:28:48.505 ************************************ 00:28:48.505 START TEST ubsan 00:28:48.505 ************************************ 00:28:48.505 using ubsan 00:28:48.505 05:19:35 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:28:48.505 00:28:48.505 real 0m0.000s 00:28:48.505 user 0m0.000s 00:28:48.505 sys 0m0.000s 00:28:48.505 05:19:35 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:28:48.505 05:19:35 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:28:48.505 ************************************ 00:28:48.505 END TEST ubsan 00:28:48.505 ************************************ 00:28:48.763 05:19:35 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:28:48.763 05:19:35 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:28:48.763 05:19:35 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:28:48.763 05:19:35 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:28:48.763 05:19:35 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:28:48.763 05:19:35 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:28:48.763 05:19:35 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:28:48.763 05:19:35 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:28:48.763 05:19:35 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:28:48.763 Using default SPDK env in 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:28:48.763 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:28:49.346 Using 'verbs' RDMA provider 00:29:05.171 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:29:17.389 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:29:17.389 Creating mk/config.mk...done. 00:29:17.389 Creating mk/cc.flags.mk...done. 00:29:17.390 Type 'make' to build. 00:29:17.390 05:20:03 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:29:17.390 05:20:03 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:29:17.390 05:20:03 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:29:17.390 05:20:03 -- common/autotest_common.sh@10 -- $ set +x 00:29:17.390 ************************************ 00:29:17.390 START TEST make 00:29:17.390 ************************************ 00:29:17.390 05:20:03 make -- common/autotest_common.sh@1129 -- $ make -j10 00:29:17.390 make[1]: Nothing to be done for 'all'. 
00:29:29.589 The Meson build system 00:29:29.589 Version: 1.5.0 00:29:29.589 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:29:29.589 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:29:29.589 Build type: native build 00:29:29.589 Program cat found: YES (/usr/bin/cat) 00:29:29.589 Project name: DPDK 00:29:29.589 Project version: 24.03.0 00:29:29.589 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:29:29.589 C linker for the host machine: cc ld.bfd 2.40-14 00:29:29.589 Host machine cpu family: x86_64 00:29:29.589 Host machine cpu: x86_64 00:29:29.589 Message: ## Building in Developer Mode ## 00:29:29.589 Program pkg-config found: YES (/usr/bin/pkg-config) 00:29:29.589 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:29:29.589 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:29:29.589 Program python3 found: YES (/usr/bin/python3) 00:29:29.589 Program cat found: YES (/usr/bin/cat) 00:29:29.589 Compiler for C supports arguments -march=native: YES 00:29:29.589 Checking for size of "void *" : 8 00:29:29.589 Checking for size of "void *" : 8 (cached) 00:29:29.589 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:29:29.589 Library m found: YES 00:29:29.589 Library numa found: YES 00:29:29.589 Has header "numaif.h" : YES 00:29:29.589 Library fdt found: NO 00:29:29.589 Library execinfo found: NO 00:29:29.589 Has header "execinfo.h" : YES 00:29:29.589 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:29:29.589 Run-time dependency libarchive found: NO (tried pkgconfig) 00:29:29.590 Run-time dependency libbsd found: NO (tried pkgconfig) 00:29:29.590 Run-time dependency jansson found: NO (tried pkgconfig) 00:29:29.590 Run-time dependency openssl found: YES 3.1.1 00:29:29.590 Run-time dependency libpcap found: YES 1.10.4 00:29:29.590 Has header "pcap.h" with dependency 
libpcap: YES 00:29:29.590 Compiler for C supports arguments -Wcast-qual: YES 00:29:29.590 Compiler for C supports arguments -Wdeprecated: YES 00:29:29.590 Compiler for C supports arguments -Wformat: YES 00:29:29.590 Compiler for C supports arguments -Wformat-nonliteral: NO 00:29:29.590 Compiler for C supports arguments -Wformat-security: NO 00:29:29.590 Compiler for C supports arguments -Wmissing-declarations: YES 00:29:29.590 Compiler for C supports arguments -Wmissing-prototypes: YES 00:29:29.590 Compiler for C supports arguments -Wnested-externs: YES 00:29:29.590 Compiler for C supports arguments -Wold-style-definition: YES 00:29:29.590 Compiler for C supports arguments -Wpointer-arith: YES 00:29:29.590 Compiler for C supports arguments -Wsign-compare: YES 00:29:29.590 Compiler for C supports arguments -Wstrict-prototypes: YES 00:29:29.590 Compiler for C supports arguments -Wundef: YES 00:29:29.590 Compiler for C supports arguments -Wwrite-strings: YES 00:29:29.590 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:29:29.590 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:29:29.590 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:29:29.590 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:29:29.590 Program objdump found: YES (/usr/bin/objdump) 00:29:29.590 Compiler for C supports arguments -mavx512f: YES 00:29:29.590 Checking if "AVX512 checking" compiles: YES 00:29:29.590 Fetching value of define "__SSE4_2__" : 1 00:29:29.590 Fetching value of define "__AES__" : 1 00:29:29.590 Fetching value of define "__AVX__" : 1 00:29:29.590 Fetching value of define "__AVX2__" : 1 00:29:29.590 Fetching value of define "__AVX512BW__" : (undefined) 00:29:29.590 Fetching value of define "__AVX512CD__" : (undefined) 00:29:29.590 Fetching value of define "__AVX512DQ__" : (undefined) 00:29:29.590 Fetching value of define "__AVX512F__" : (undefined) 00:29:29.590 Fetching value of define "__AVX512VL__" : 
(undefined) 00:29:29.590 Fetching value of define "__PCLMUL__" : 1 00:29:29.590 Fetching value of define "__RDRND__" : 1 00:29:29.590 Fetching value of define "__RDSEED__" : 1 00:29:29.590 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:29:29.590 Fetching value of define "__znver1__" : (undefined) 00:29:29.590 Fetching value of define "__znver2__" : (undefined) 00:29:29.590 Fetching value of define "__znver3__" : (undefined) 00:29:29.590 Fetching value of define "__znver4__" : (undefined) 00:29:29.590 Library asan found: YES 00:29:29.590 Compiler for C supports arguments -Wno-format-truncation: YES 00:29:29.590 Message: lib/log: Defining dependency "log" 00:29:29.590 Message: lib/kvargs: Defining dependency "kvargs" 00:29:29.590 Message: lib/telemetry: Defining dependency "telemetry" 00:29:29.590 Library rt found: YES 00:29:29.590 Checking for function "getentropy" : NO 00:29:29.590 Message: lib/eal: Defining dependency "eal" 00:29:29.590 Message: lib/ring: Defining dependency "ring" 00:29:29.590 Message: lib/rcu: Defining dependency "rcu" 00:29:29.590 Message: lib/mempool: Defining dependency "mempool" 00:29:29.590 Message: lib/mbuf: Defining dependency "mbuf" 00:29:29.590 Fetching value of define "__PCLMUL__" : 1 (cached) 00:29:29.590 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:29:29.590 Compiler for C supports arguments -mpclmul: YES 00:29:29.590 Compiler for C supports arguments -maes: YES 00:29:29.590 Compiler for C supports arguments -mavx512f: YES (cached) 00:29:29.590 Compiler for C supports arguments -mavx512bw: YES 00:29:29.590 Compiler for C supports arguments -mavx512dq: YES 00:29:29.590 Compiler for C supports arguments -mavx512vl: YES 00:29:29.590 Compiler for C supports arguments -mvpclmulqdq: YES 00:29:29.590 Compiler for C supports arguments -mavx2: YES 00:29:29.590 Compiler for C supports arguments -mavx: YES 00:29:29.590 Message: lib/net: Defining dependency "net" 00:29:29.590 Message: lib/meter: Defining 
dependency "meter" 00:29:29.590 Message: lib/ethdev: Defining dependency "ethdev" 00:29:29.590 Message: lib/pci: Defining dependency "pci" 00:29:29.590 Message: lib/cmdline: Defining dependency "cmdline" 00:29:29.590 Message: lib/hash: Defining dependency "hash" 00:29:29.590 Message: lib/timer: Defining dependency "timer" 00:29:29.590 Message: lib/compressdev: Defining dependency "compressdev" 00:29:29.590 Message: lib/cryptodev: Defining dependency "cryptodev" 00:29:29.590 Message: lib/dmadev: Defining dependency "dmadev" 00:29:29.590 Compiler for C supports arguments -Wno-cast-qual: YES 00:29:29.590 Message: lib/power: Defining dependency "power" 00:29:29.590 Message: lib/reorder: Defining dependency "reorder" 00:29:29.590 Message: lib/security: Defining dependency "security" 00:29:29.590 Has header "linux/userfaultfd.h" : YES 00:29:29.590 Has header "linux/vduse.h" : YES 00:29:29.590 Message: lib/vhost: Defining dependency "vhost" 00:29:29.590 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:29:29.590 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:29:29.590 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:29:29.590 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:29:29.590 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:29:29.590 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:29:29.590 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:29:29.590 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:29:29.590 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:29:29.590 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:29:29.590 Program doxygen found: YES (/usr/local/bin/doxygen) 00:29:29.590 Configuring doxy-api-html.conf using configuration 00:29:29.590 Configuring doxy-api-man.conf using configuration 00:29:29.590 Program mandb found: YES 
(/usr/bin/mandb) 00:29:29.590 Program sphinx-build found: NO 00:29:29.590 Configuring rte_build_config.h using configuration 00:29:29.590 Message: 00:29:29.590 ================= 00:29:29.590 Applications Enabled 00:29:29.590 ================= 00:29:29.590 00:29:29.590 apps: 00:29:29.590 00:29:29.590 00:29:29.590 Message: 00:29:29.590 ================= 00:29:29.590 Libraries Enabled 00:29:29.590 ================= 00:29:29.590 00:29:29.590 libs: 00:29:29.590 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:29:29.590 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:29:29.590 cryptodev, dmadev, power, reorder, security, vhost, 00:29:29.590 00:29:29.590 Message: 00:29:29.590 =============== 00:29:29.590 Drivers Enabled 00:29:29.590 =============== 00:29:29.590 00:29:29.590 common: 00:29:29.590 00:29:29.590 bus: 00:29:29.590 pci, vdev, 00:29:29.590 mempool: 00:29:29.590 ring, 00:29:29.590 dma: 00:29:29.590 00:29:29.590 net: 00:29:29.590 00:29:29.590 crypto: 00:29:29.590 00:29:29.591 compress: 00:29:29.591 00:29:29.591 vdpa: 00:29:29.591 00:29:29.591 00:29:29.591 Message: 00:29:29.591 ================= 00:29:29.591 Content Skipped 00:29:29.591 ================= 00:29:29.591 00:29:29.591 apps: 00:29:29.591 dumpcap: explicitly disabled via build config 00:29:29.591 graph: explicitly disabled via build config 00:29:29.591 pdump: explicitly disabled via build config 00:29:29.591 proc-info: explicitly disabled via build config 00:29:29.591 test-acl: explicitly disabled via build config 00:29:29.591 test-bbdev: explicitly disabled via build config 00:29:29.591 test-cmdline: explicitly disabled via build config 00:29:29.591 test-compress-perf: explicitly disabled via build config 00:29:29.591 test-crypto-perf: explicitly disabled via build config 00:29:29.591 test-dma-perf: explicitly disabled via build config 00:29:29.591 test-eventdev: explicitly disabled via build config 00:29:29.591 test-fib: explicitly disabled via build config 00:29:29.591 
test-flow-perf: explicitly disabled via build config 00:29:29.591 test-gpudev: explicitly disabled via build config 00:29:29.591 test-mldev: explicitly disabled via build config 00:29:29.591 test-pipeline: explicitly disabled via build config 00:29:29.591 test-pmd: explicitly disabled via build config 00:29:29.591 test-regex: explicitly disabled via build config 00:29:29.591 test-sad: explicitly disabled via build config 00:29:29.591 test-security-perf: explicitly disabled via build config 00:29:29.591 00:29:29.591 libs: 00:29:29.591 argparse: explicitly disabled via build config 00:29:29.591 metrics: explicitly disabled via build config 00:29:29.591 acl: explicitly disabled via build config 00:29:29.591 bbdev: explicitly disabled via build config 00:29:29.591 bitratestats: explicitly disabled via build config 00:29:29.591 bpf: explicitly disabled via build config 00:29:29.591 cfgfile: explicitly disabled via build config 00:29:29.591 distributor: explicitly disabled via build config 00:29:29.591 efd: explicitly disabled via build config 00:29:29.591 eventdev: explicitly disabled via build config 00:29:29.591 dispatcher: explicitly disabled via build config 00:29:29.591 gpudev: explicitly disabled via build config 00:29:29.591 gro: explicitly disabled via build config 00:29:29.591 gso: explicitly disabled via build config 00:29:29.591 ip_frag: explicitly disabled via build config 00:29:29.591 jobstats: explicitly disabled via build config 00:29:29.591 latencystats: explicitly disabled via build config 00:29:29.591 lpm: explicitly disabled via build config 00:29:29.591 member: explicitly disabled via build config 00:29:29.591 pcapng: explicitly disabled via build config 00:29:29.591 rawdev: explicitly disabled via build config 00:29:29.591 regexdev: explicitly disabled via build config 00:29:29.591 mldev: explicitly disabled via build config 00:29:29.591 rib: explicitly disabled via build config 00:29:29.591 sched: explicitly disabled via build config 00:29:29.591 
stack: explicitly disabled via build config 00:29:29.591 ipsec: explicitly disabled via build config 00:29:29.591 pdcp: explicitly disabled via build config 00:29:29.591 fib: explicitly disabled via build config 00:29:29.591 port: explicitly disabled via build config 00:29:29.591 pdump: explicitly disabled via build config 00:29:29.591 table: explicitly disabled via build config 00:29:29.591 pipeline: explicitly disabled via build config 00:29:29.591 graph: explicitly disabled via build config 00:29:29.591 node: explicitly disabled via build config 00:29:29.591 00:29:29.591 drivers: 00:29:29.591 common/cpt: not in enabled drivers build config 00:29:29.591 common/dpaax: not in enabled drivers build config 00:29:29.591 common/iavf: not in enabled drivers build config 00:29:29.591 common/idpf: not in enabled drivers build config 00:29:29.591 common/ionic: not in enabled drivers build config 00:29:29.591 common/mvep: not in enabled drivers build config 00:29:29.591 common/octeontx: not in enabled drivers build config 00:29:29.591 bus/auxiliary: not in enabled drivers build config 00:29:29.591 bus/cdx: not in enabled drivers build config 00:29:29.591 bus/dpaa: not in enabled drivers build config 00:29:29.591 bus/fslmc: not in enabled drivers build config 00:29:29.591 bus/ifpga: not in enabled drivers build config 00:29:29.591 bus/platform: not in enabled drivers build config 00:29:29.591 bus/uacce: not in enabled drivers build config 00:29:29.591 bus/vmbus: not in enabled drivers build config 00:29:29.591 common/cnxk: not in enabled drivers build config 00:29:29.591 common/mlx5: not in enabled drivers build config 00:29:29.591 common/nfp: not in enabled drivers build config 00:29:29.591 common/nitrox: not in enabled drivers build config 00:29:29.591 common/qat: not in enabled drivers build config 00:29:29.591 common/sfc_efx: not in enabled drivers build config 00:29:29.591 mempool/bucket: not in enabled drivers build config 00:29:29.591 mempool/cnxk: not in enabled 
drivers build config 00:29:29.591 mempool/dpaa: not in enabled drivers build config 00:29:29.591 mempool/dpaa2: not in enabled drivers build config 00:29:29.591 mempool/octeontx: not in enabled drivers build config 00:29:29.591 mempool/stack: not in enabled drivers build config 00:29:29.591 dma/cnxk: not in enabled drivers build config 00:29:29.591 dma/dpaa: not in enabled drivers build config 00:29:29.591 dma/dpaa2: not in enabled drivers build config 00:29:29.591 dma/hisilicon: not in enabled drivers build config 00:29:29.591 dma/idxd: not in enabled drivers build config 00:29:29.591 dma/ioat: not in enabled drivers build config 00:29:29.591 dma/skeleton: not in enabled drivers build config 00:29:29.591 net/af_packet: not in enabled drivers build config 00:29:29.591 net/af_xdp: not in enabled drivers build config 00:29:29.591 net/ark: not in enabled drivers build config 00:29:29.591 net/atlantic: not in enabled drivers build config 00:29:29.591 net/avp: not in enabled drivers build config 00:29:29.591 net/axgbe: not in enabled drivers build config 00:29:29.591 net/bnx2x: not in enabled drivers build config 00:29:29.591 net/bnxt: not in enabled drivers build config 00:29:29.591 net/bonding: not in enabled drivers build config 00:29:29.591 net/cnxk: not in enabled drivers build config 00:29:29.591 net/cpfl: not in enabled drivers build config 00:29:29.591 net/cxgbe: not in enabled drivers build config 00:29:29.591 net/dpaa: not in enabled drivers build config 00:29:29.591 net/dpaa2: not in enabled drivers build config 00:29:29.591 net/e1000: not in enabled drivers build config 00:29:29.591 net/ena: not in enabled drivers build config 00:29:29.591 net/enetc: not in enabled drivers build config 00:29:29.591 net/enetfec: not in enabled drivers build config 00:29:29.591 net/enic: not in enabled drivers build config 00:29:29.591 net/failsafe: not in enabled drivers build config 00:29:29.591 net/fm10k: not in enabled drivers build config 00:29:29.591 net/gve: not in 
enabled drivers build config 00:29:29.591 net/hinic: not in enabled drivers build config 00:29:29.591 net/hns3: not in enabled drivers build config 00:29:29.591 net/i40e: not in enabled drivers build config 00:29:29.591 net/iavf: not in enabled drivers build config 00:29:29.592 net/ice: not in enabled drivers build config 00:29:29.592 net/idpf: not in enabled drivers build config 00:29:29.592 net/igc: not in enabled drivers build config 00:29:29.592 net/ionic: not in enabled drivers build config 00:29:29.592 net/ipn3ke: not in enabled drivers build config 00:29:29.592 net/ixgbe: not in enabled drivers build config 00:29:29.592 net/mana: not in enabled drivers build config 00:29:29.592 net/memif: not in enabled drivers build config 00:29:29.592 net/mlx4: not in enabled drivers build config 00:29:29.592 net/mlx5: not in enabled drivers build config 00:29:29.592 net/mvneta: not in enabled drivers build config 00:29:29.592 net/mvpp2: not in enabled drivers build config 00:29:29.592 net/netvsc: not in enabled drivers build config 00:29:29.592 net/nfb: not in enabled drivers build config 00:29:29.592 net/nfp: not in enabled drivers build config 00:29:29.592 net/ngbe: not in enabled drivers build config 00:29:29.592 net/null: not in enabled drivers build config 00:29:29.592 net/octeontx: not in enabled drivers build config 00:29:29.592 net/octeon_ep: not in enabled drivers build config 00:29:29.592 net/pcap: not in enabled drivers build config 00:29:29.592 net/pfe: not in enabled drivers build config 00:29:29.592 net/qede: not in enabled drivers build config 00:29:29.592 net/ring: not in enabled drivers build config 00:29:29.592 net/sfc: not in enabled drivers build config 00:29:29.592 net/softnic: not in enabled drivers build config 00:29:29.592 net/tap: not in enabled drivers build config 00:29:29.592 net/thunderx: not in enabled drivers build config 00:29:29.592 net/txgbe: not in enabled drivers build config 00:29:29.592 net/vdev_netvsc: not in enabled drivers build 
config 00:29:29.592 net/vhost: not in enabled drivers build config 00:29:29.592 net/virtio: not in enabled drivers build config 00:29:29.592 net/vmxnet3: not in enabled drivers build config 00:29:29.592 raw/*: missing internal dependency, "rawdev" 00:29:29.592 crypto/armv8: not in enabled drivers build config 00:29:29.592 crypto/bcmfs: not in enabled drivers build config 00:29:29.592 crypto/caam_jr: not in enabled drivers build config 00:29:29.592 crypto/ccp: not in enabled drivers build config 00:29:29.592 crypto/cnxk: not in enabled drivers build config 00:29:29.592 crypto/dpaa_sec: not in enabled drivers build config 00:29:29.592 crypto/dpaa2_sec: not in enabled drivers build config 00:29:29.592 crypto/ipsec_mb: not in enabled drivers build config 00:29:29.592 crypto/mlx5: not in enabled drivers build config 00:29:29.592 crypto/mvsam: not in enabled drivers build config 00:29:29.592 crypto/nitrox: not in enabled drivers build config 00:29:29.592 crypto/null: not in enabled drivers build config 00:29:29.592 crypto/octeontx: not in enabled drivers build config 00:29:29.592 crypto/openssl: not in enabled drivers build config 00:29:29.592 crypto/scheduler: not in enabled drivers build config 00:29:29.592 crypto/uadk: not in enabled drivers build config 00:29:29.592 crypto/virtio: not in enabled drivers build config 00:29:29.592 compress/isal: not in enabled drivers build config 00:29:29.592 compress/mlx5: not in enabled drivers build config 00:29:29.592 compress/nitrox: not in enabled drivers build config 00:29:29.592 compress/octeontx: not in enabled drivers build config 00:29:29.592 compress/zlib: not in enabled drivers build config 00:29:29.592 regex/*: missing internal dependency, "regexdev" 00:29:29.592 ml/*: missing internal dependency, "mldev" 00:29:29.592 vdpa/ifc: not in enabled drivers build config 00:29:29.592 vdpa/mlx5: not in enabled drivers build config 00:29:29.592 vdpa/nfp: not in enabled drivers build config 00:29:29.592 vdpa/sfc: not in enabled 
drivers build config 00:29:29.592 event/*: missing internal dependency, "eventdev" 00:29:29.592 baseband/*: missing internal dependency, "bbdev" 00:29:29.592 gpu/*: missing internal dependency, "gpudev" 00:29:29.592 00:29:29.592 00:29:29.592 Build targets in project: 85 00:29:29.592 00:29:29.592 DPDK 24.03.0 00:29:29.592 00:29:29.592 User defined options 00:29:29.592 buildtype : debug 00:29:29.592 default_library : shared 00:29:29.592 libdir : lib 00:29:29.592 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:29:29.592 b_sanitize : address 00:29:29.592 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:29:29.592 c_link_args : 00:29:29.592 cpu_instruction_set: native 00:29:29.592 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:29:29.592 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:29:29.592 enable_docs : false 00:29:29.592 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:29:29.592 enable_kmods : false 00:29:29.592 max_lcores : 128 00:29:29.592 tests : false 00:29:29.592 00:29:29.592 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:29:30.158 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:29:30.158 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:29:30.158 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:29:30.158 [3/268] Linking static target lib/librte_kvargs.a 00:29:30.416 [4/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:29:30.416 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:29:30.416 [6/268] Linking static target lib/librte_log.a 00:29:30.674 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:29:30.933 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:29:30.933 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:29:30.933 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:29:30.933 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:29:31.191 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:29:31.191 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:29:31.191 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:29:31.192 [15/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:29:31.450 [16/268] Linking target lib/librte_log.so.24.1 00:29:31.450 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:29:31.450 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:29:31.450 [19/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:29:31.708 [20/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:29:31.708 [21/268] Linking target lib/librte_kvargs.so.24.1 00:29:31.708 [22/268] Linking static target lib/librte_telemetry.a 00:29:31.708 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:29:31.966 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:29:31.966 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:29:31.966 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 
00:29:31.966 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:29:32.224 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:29:32.224 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:29:32.481 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:29:32.481 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:29:32.481 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:29:32.481 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:29:32.481 [34/268] Linking target lib/librte_telemetry.so.24.1 00:29:32.739 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:29:32.739 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:29:32.739 [37/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:29:32.739 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:29:32.997 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:29:32.997 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:29:32.997 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:29:32.997 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:29:32.997 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:29:32.997 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:29:33.563 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:29:33.563 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:29:33.563 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:29:33.821 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_random.c.o 00:29:33.821 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:29:33.821 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:29:33.821 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:29:33.821 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:29:34.079 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:29:34.079 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:29:34.079 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:29:34.337 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:29:34.595 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:29:34.595 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:29:34.595 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:29:34.595 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:29:34.853 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:29:34.853 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:29:34.853 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:29:34.853 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:29:35.110 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:29:35.110 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:29:35.110 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:29:35.110 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:29:35.368 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:29:35.368 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:29:35.368 [71/268] 
Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:29:35.625 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:29:35.625 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:29:35.625 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:29:35.625 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:29:35.625 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:29:35.884 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:29:35.884 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:29:35.884 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:29:35.884 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:29:36.140 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:29:36.140 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:29:36.398 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:29:36.398 [84/268] Linking static target lib/librte_ring.a 00:29:36.398 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:29:36.656 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:29:36.656 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:29:36.656 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:29:36.656 [89/268] Linking static target lib/librte_mempool.a 00:29:36.656 [90/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:29:36.656 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:29:36.656 [92/268] Linking static target lib/librte_eal.a 00:29:36.914 [93/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:29:36.914 [94/268] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:29:37.180 [95/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:29:37.180 [96/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:29:37.180 [97/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:29:37.180 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:29:37.180 [99/268] Linking static target lib/librte_rcu.a 00:29:37.438 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:29:37.438 [101/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:29:37.697 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:29:37.697 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:29:37.697 [104/268] Linking static target lib/librte_mbuf.a 00:29:37.697 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:29:37.954 [106/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:29:37.954 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:29:37.954 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:29:37.954 [109/268] Linking static target lib/librte_net.a 00:29:37.954 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:29:37.954 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:29:37.954 [112/268] Linking static target lib/librte_meter.a 00:29:38.518 [113/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:29:38.518 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:29:38.518 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:29:38.518 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:29:38.518 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:29:38.518 
[118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:29:38.775 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:29:39.033 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:29:39.033 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:29:39.599 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:29:39.599 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:29:39.599 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:29:39.599 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:29:39.858 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:29:39.858 [127/268] Linking static target lib/librte_pci.a 00:29:39.858 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:29:39.858 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:29:39.858 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:29:40.115 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:29:40.115 [132/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:29:40.115 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:29:40.115 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:29:40.115 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:29:40.423 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:29:40.423 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:29:40.423 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:29:40.423 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:29:40.423 [140/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:29:40.423 [141/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:29:40.423 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:29:40.423 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:29:40.423 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:29:40.681 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:29:40.681 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:29:40.681 [147/268] Linking static target lib/librte_cmdline.a 00:29:41.246 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:29:41.246 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:29:41.246 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:29:41.504 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:29:41.504 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:29:41.504 [153/268] Linking static target lib/librte_timer.a 00:29:41.504 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:29:41.504 [155/268] Linking static target lib/librte_ethdev.a 00:29:41.762 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:29:41.762 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:29:42.327 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:29:42.327 [159/268] Linking static target lib/librte_compressdev.a 00:29:42.327 [160/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:29:42.327 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:29:42.327 [162/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:29:42.327 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:29:42.327 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:29:42.327 [165/268] Linking static target lib/librte_hash.a 00:29:42.584 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:29:42.584 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:29:42.584 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:29:42.841 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:29:42.841 [170/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:29:42.841 [171/268] Linking static target lib/librte_dmadev.a 00:29:43.099 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:29:43.099 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:29:43.099 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:29:43.357 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:29:43.614 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:29:43.614 [177/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:29:43.614 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:29:43.614 [179/268] Linking static target lib/librte_cryptodev.a 00:29:43.872 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:29:43.872 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:29:43.872 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:29:43.872 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:29:43.872 [184/268] 
Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:29:44.130 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:29:44.130 [186/268] Linking static target lib/librte_power.a 00:29:44.695 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:29:44.695 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:29:44.695 [189/268] Linking static target lib/librte_reorder.a 00:29:44.695 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:29:44.695 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:29:44.695 [192/268] Linking static target lib/librte_security.a 00:29:44.953 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:29:45.210 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:29:45.210 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:29:45.467 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:29:45.468 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:29:45.725 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:29:45.983 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:29:45.983 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:29:45.983 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:29:45.983 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:29:46.241 [203/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:29:46.498 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:29:46.755 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:29:46.755 [206/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:29:46.755 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:29:46.755 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:29:46.755 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:29:46.755 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:29:46.755 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:29:47.013 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:29:47.013 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:29:47.013 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:29:47.013 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:29:47.013 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:29:47.013 [217/268] Linking static target drivers/librte_bus_vdev.a 00:29:47.013 [218/268] Linking static target drivers/librte_bus_pci.a 00:29:47.270 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:29:47.578 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:29:47.579 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:29:47.579 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:29:47.579 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:29:47.579 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:29:47.579 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:29:47.579 [226/268] Linking static target drivers/librte_mempool_ring.a 00:29:47.836 [227/268] 
Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:29:48.400 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:29:48.965 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:29:48.965 [230/268] Linking target lib/librte_eal.so.24.1 00:29:48.965 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:29:48.965 [232/268] Linking target lib/librte_ring.so.24.1 00:29:48.965 [233/268] Linking target lib/librte_dmadev.so.24.1 00:29:48.965 [234/268] Linking target drivers/librte_bus_vdev.so.24.1 00:29:48.965 [235/268] Linking target lib/librte_timer.so.24.1 00:29:49.223 [236/268] Linking target lib/librte_pci.so.24.1 00:29:49.223 [237/268] Linking target lib/librte_meter.so.24.1 00:29:49.223 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:29:49.223 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:29:49.223 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:29:49.223 [241/268] Linking target lib/librte_rcu.so.24.1 00:29:49.223 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:29:49.223 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:29:49.223 [244/268] Linking target lib/librte_mempool.so.24.1 00:29:49.223 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:29:49.482 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:29:49.482 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:29:49.482 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:29:49.482 [249/268] Linking target lib/librte_mbuf.so.24.1 00:29:49.741 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:29:49.741 [251/268] 
Linking target lib/librte_reorder.so.24.1 00:29:49.741 [252/268] Linking target lib/librte_net.so.24.1 00:29:49.741 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:29:49.741 [254/268] Linking target lib/librte_compressdev.so.24.1 00:29:49.741 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:29:49.741 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:29:49.741 [257/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:29:49.741 [258/268] Linking target lib/librte_hash.so.24.1 00:29:49.741 [259/268] Linking target lib/librte_cmdline.so.24.1 00:29:49.741 [260/268] Linking target lib/librte_security.so.24.1 00:29:50.000 [261/268] Linking target lib/librte_ethdev.so.24.1 00:29:50.000 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:29:50.000 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:29:50.258 [264/268] Linking target lib/librte_power.so.24.1 00:29:52.788 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:29:52.788 [266/268] Linking static target lib/librte_vhost.a 00:29:54.176 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:29:54.176 [268/268] Linking target lib/librte_vhost.so.24.1 00:29:54.176 INFO: autodetecting backend as ninja 00:29:54.176 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:30:16.136 CC lib/ut/ut.o 00:30:16.136 CC lib/log/log.o 00:30:16.136 CC lib/log/log_flags.o 00:30:16.136 CC lib/log/log_deprecated.o 00:30:16.136 CC lib/ut_mock/mock.o 00:30:16.136 LIB libspdk_ut.a 00:30:16.136 LIB libspdk_log.a 00:30:16.136 LIB libspdk_ut_mock.a 00:30:16.136 SO libspdk_ut_mock.so.6.0 00:30:16.136 SO libspdk_ut.so.2.0 00:30:16.136 SO libspdk_log.so.7.1 00:30:16.136 SYMLINK libspdk_ut_mock.so 
00:30:16.136 SYMLINK libspdk_ut.so 00:30:16.136 SYMLINK libspdk_log.so 00:30:16.136 CC lib/util/base64.o 00:30:16.136 CC lib/dma/dma.o 00:30:16.136 CC lib/util/cpuset.o 00:30:16.136 CC lib/util/bit_array.o 00:30:16.136 CC lib/util/crc16.o 00:30:16.136 CC lib/util/crc32.o 00:30:16.136 CXX lib/trace_parser/trace.o 00:30:16.136 CC lib/util/crc32c.o 00:30:16.136 CC lib/ioat/ioat.o 00:30:16.136 CC lib/vfio_user/host/vfio_user_pci.o 00:30:16.136 CC lib/util/crc32_ieee.o 00:30:16.136 CC lib/util/crc64.o 00:30:16.136 CC lib/util/dif.o 00:30:16.136 CC lib/util/fd.o 00:30:16.136 CC lib/util/fd_group.o 00:30:16.136 LIB libspdk_dma.a 00:30:16.136 CC lib/util/file.o 00:30:16.136 SO libspdk_dma.so.5.0 00:30:16.136 CC lib/util/hexlify.o 00:30:16.136 CC lib/util/iov.o 00:30:16.136 SYMLINK libspdk_dma.so 00:30:16.136 CC lib/util/math.o 00:30:16.136 LIB libspdk_ioat.a 00:30:16.136 CC lib/util/net.o 00:30:16.136 SO libspdk_ioat.so.7.0 00:30:16.136 CC lib/vfio_user/host/vfio_user.o 00:30:16.136 CC lib/util/pipe.o 00:30:16.136 CC lib/util/strerror_tls.o 00:30:16.136 SYMLINK libspdk_ioat.so 00:30:16.136 CC lib/util/string.o 00:30:16.136 CC lib/util/uuid.o 00:30:16.136 CC lib/util/xor.o 00:30:16.136 CC lib/util/zipf.o 00:30:16.136 CC lib/util/md5.o 00:30:16.136 LIB libspdk_vfio_user.a 00:30:16.136 SO libspdk_vfio_user.so.5.0 00:30:16.136 SYMLINK libspdk_vfio_user.so 00:30:16.136 LIB libspdk_util.a 00:30:16.136 SO libspdk_util.so.10.1 00:30:16.136 LIB libspdk_trace_parser.a 00:30:16.136 SO libspdk_trace_parser.so.6.0 00:30:16.136 SYMLINK libspdk_util.so 00:30:16.136 SYMLINK libspdk_trace_parser.so 00:30:16.395 CC lib/conf/conf.o 00:30:16.395 CC lib/json/json_parse.o 00:30:16.395 CC lib/json/json_util.o 00:30:16.395 CC lib/json/json_write.o 00:30:16.395 CC lib/env_dpdk/memory.o 00:30:16.395 CC lib/env_dpdk/env.o 00:30:16.395 CC lib/rdma_utils/rdma_utils.o 00:30:16.395 CC lib/env_dpdk/pci.o 00:30:16.395 CC lib/vmd/vmd.o 00:30:16.395 CC lib/idxd/idxd.o 00:30:16.654 CC lib/vmd/led.o 
00:30:16.654 LIB libspdk_conf.a 00:30:16.654 CC lib/env_dpdk/init.o 00:30:16.654 LIB libspdk_rdma_utils.a 00:30:16.654 SO libspdk_conf.so.6.0 00:30:16.654 LIB libspdk_json.a 00:30:16.654 SO libspdk_rdma_utils.so.1.0 00:30:16.913 SO libspdk_json.so.6.0 00:30:16.913 SYMLINK libspdk_conf.so 00:30:16.913 SYMLINK libspdk_rdma_utils.so 00:30:16.913 CC lib/idxd/idxd_user.o 00:30:16.913 SYMLINK libspdk_json.so 00:30:16.913 CC lib/idxd/idxd_kernel.o 00:30:16.913 CC lib/env_dpdk/threads.o 00:30:16.913 CC lib/env_dpdk/pci_ioat.o 00:30:16.913 CC lib/rdma_provider/common.o 00:30:16.913 CC lib/rdma_provider/rdma_provider_verbs.o 00:30:17.171 CC lib/env_dpdk/pci_virtio.o 00:30:17.171 CC lib/env_dpdk/pci_vmd.o 00:30:17.171 CC lib/env_dpdk/pci_idxd.o 00:30:17.171 CC lib/env_dpdk/pci_event.o 00:30:17.171 CC lib/env_dpdk/sigbus_handler.o 00:30:17.171 CC lib/env_dpdk/pci_dpdk.o 00:30:17.171 CC lib/env_dpdk/pci_dpdk_2207.o 00:30:17.171 CC lib/jsonrpc/jsonrpc_server.o 00:30:17.171 LIB libspdk_rdma_provider.a 00:30:17.429 SO libspdk_rdma_provider.so.7.0 00:30:17.429 LIB libspdk_vmd.a 00:30:17.429 CC lib/env_dpdk/pci_dpdk_2211.o 00:30:17.429 SYMLINK libspdk_rdma_provider.so 00:30:17.429 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:30:17.429 CC lib/jsonrpc/jsonrpc_client.o 00:30:17.429 SO libspdk_vmd.so.6.0 00:30:17.429 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:30:17.429 LIB libspdk_idxd.a 00:30:17.429 SYMLINK libspdk_vmd.so 00:30:17.429 SO libspdk_idxd.so.12.1 00:30:17.688 SYMLINK libspdk_idxd.so 00:30:17.688 LIB libspdk_jsonrpc.a 00:30:17.688 SO libspdk_jsonrpc.so.6.0 00:30:17.688 SYMLINK libspdk_jsonrpc.so 00:30:18.264 CC lib/rpc/rpc.o 00:30:18.264 LIB libspdk_rpc.a 00:30:18.264 LIB libspdk_env_dpdk.a 00:30:18.264 SO libspdk_rpc.so.6.0 00:30:18.526 SYMLINK libspdk_rpc.so 00:30:18.526 SO libspdk_env_dpdk.so.15.1 00:30:18.526 SYMLINK libspdk_env_dpdk.so 00:30:18.526 CC lib/trace/trace.o 00:30:18.526 CC lib/trace/trace_rpc.o 00:30:18.526 CC lib/keyring/keyring.o 00:30:18.526 CC 
lib/keyring/keyring_rpc.o 00:30:18.526 CC lib/notify/notify_rpc.o 00:30:18.526 CC lib/trace/trace_flags.o 00:30:18.798 CC lib/notify/notify.o 00:30:18.798 LIB libspdk_notify.a 00:30:18.798 SO libspdk_notify.so.6.0 00:30:19.057 SYMLINK libspdk_notify.so 00:30:19.057 LIB libspdk_keyring.a 00:30:19.057 LIB libspdk_trace.a 00:30:19.057 SO libspdk_keyring.so.2.0 00:30:19.057 SO libspdk_trace.so.11.0 00:30:19.057 SYMLINK libspdk_keyring.so 00:30:19.057 SYMLINK libspdk_trace.so 00:30:19.315 CC lib/sock/sock.o 00:30:19.315 CC lib/sock/sock_rpc.o 00:30:19.315 CC lib/thread/thread.o 00:30:19.315 CC lib/thread/iobuf.o 00:30:20.251 LIB libspdk_sock.a 00:30:20.251 SO libspdk_sock.so.10.0 00:30:20.251 SYMLINK libspdk_sock.so 00:30:20.509 CC lib/nvme/nvme_ctrlr_cmd.o 00:30:20.509 CC lib/nvme/nvme_ctrlr.o 00:30:20.509 CC lib/nvme/nvme_fabric.o 00:30:20.509 CC lib/nvme/nvme_ns_cmd.o 00:30:20.509 CC lib/nvme/nvme_ns.o 00:30:20.509 CC lib/nvme/nvme_qpair.o 00:30:20.509 CC lib/nvme/nvme_pcie_common.o 00:30:20.509 CC lib/nvme/nvme_pcie.o 00:30:20.509 CC lib/nvme/nvme.o 00:30:21.468 LIB libspdk_thread.a 00:30:21.468 CC lib/nvme/nvme_quirks.o 00:30:21.468 CC lib/nvme/nvme_transport.o 00:30:21.468 SO libspdk_thread.so.11.0 00:30:21.468 CC lib/nvme/nvme_discovery.o 00:30:21.468 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:30:21.468 SYMLINK libspdk_thread.so 00:30:21.468 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:30:21.468 CC lib/nvme/nvme_tcp.o 00:30:21.468 CC lib/nvme/nvme_opal.o 00:30:21.727 CC lib/nvme/nvme_io_msg.o 00:30:21.727 CC lib/nvme/nvme_poll_group.o 00:30:21.986 CC lib/nvme/nvme_zns.o 00:30:21.986 CC lib/nvme/nvme_stubs.o 00:30:22.245 CC lib/nvme/nvme_auth.o 00:30:22.245 CC lib/nvme/nvme_cuse.o 00:30:22.245 CC lib/accel/accel.o 00:30:22.245 CC lib/nvme/nvme_rdma.o 00:30:22.504 CC lib/blob/blobstore.o 00:30:22.504 CC lib/blob/request.o 00:30:22.762 CC lib/blob/zeroes.o 00:30:22.762 CC lib/blob/blob_bs_dev.o 00:30:23.020 CC lib/init/json_config.o 00:30:23.020 CC lib/accel/accel_rpc.o 
00:30:23.277 CC lib/accel/accel_sw.o 00:30:23.277 CC lib/init/subsystem.o 00:30:23.277 CC lib/init/subsystem_rpc.o 00:30:23.277 CC lib/init/rpc.o 00:30:23.535 CC lib/virtio/virtio_vhost_user.o 00:30:23.535 CC lib/virtio/virtio_pci.o 00:30:23.535 CC lib/virtio/virtio.o 00:30:23.535 CC lib/virtio/virtio_vfio_user.o 00:30:23.535 LIB libspdk_init.a 00:30:23.535 SO libspdk_init.so.6.0 00:30:23.535 CC lib/fsdev/fsdev.o 00:30:23.535 CC lib/fsdev/fsdev_io.o 00:30:23.535 LIB libspdk_accel.a 00:30:23.792 SYMLINK libspdk_init.so 00:30:23.792 CC lib/fsdev/fsdev_rpc.o 00:30:23.792 SO libspdk_accel.so.16.0 00:30:23.792 SYMLINK libspdk_accel.so 00:30:23.792 LIB libspdk_virtio.a 00:30:23.792 CC lib/event/app.o 00:30:23.792 CC lib/event/reactor.o 00:30:23.792 CC lib/event/app_rpc.o 00:30:23.792 CC lib/event/log_rpc.o 00:30:23.792 CC lib/bdev/bdev.o 00:30:24.049 SO libspdk_virtio.so.7.0 00:30:24.049 LIB libspdk_nvme.a 00:30:24.049 SYMLINK libspdk_virtio.so 00:30:24.049 CC lib/event/scheduler_static.o 00:30:24.049 CC lib/bdev/bdev_rpc.o 00:30:24.049 CC lib/bdev/bdev_zone.o 00:30:24.306 CC lib/bdev/part.o 00:30:24.306 CC lib/bdev/scsi_nvme.o 00:30:24.306 SO libspdk_nvme.so.15.0 00:30:24.306 LIB libspdk_fsdev.a 00:30:24.306 SO libspdk_fsdev.so.2.0 00:30:24.564 SYMLINK libspdk_fsdev.so 00:30:24.564 LIB libspdk_event.a 00:30:24.564 SYMLINK libspdk_nvme.so 00:30:24.564 SO libspdk_event.so.14.0 00:30:24.564 SYMLINK libspdk_event.so 00:30:24.823 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:30:25.757 LIB libspdk_fuse_dispatcher.a 00:30:25.757 SO libspdk_fuse_dispatcher.so.1.0 00:30:25.757 SYMLINK libspdk_fuse_dispatcher.so 00:30:26.693 LIB libspdk_blob.a 00:30:26.693 SO libspdk_blob.so.12.0 00:30:26.953 SYMLINK libspdk_blob.so 00:30:27.211 CC lib/blobfs/blobfs.o 00:30:27.211 CC lib/blobfs/tree.o 00:30:27.211 CC lib/lvol/lvol.o 00:30:27.470 LIB libspdk_bdev.a 00:30:27.470 SO libspdk_bdev.so.17.0 00:30:27.470 SYMLINK libspdk_bdev.so 00:30:27.727 CC lib/scsi/dev.o 00:30:27.727 CC lib/scsi/lun.o 
00:30:27.727 CC lib/scsi/port.o 00:30:27.727 CC lib/scsi/scsi.o 00:30:27.727 CC lib/nbd/nbd.o 00:30:27.727 CC lib/ublk/ublk.o 00:30:27.727 CC lib/ftl/ftl_core.o 00:30:27.727 CC lib/nvmf/ctrlr.o 00:30:28.034 CC lib/scsi/scsi_bdev.o 00:30:28.034 CC lib/ftl/ftl_init.o 00:30:28.034 CC lib/scsi/scsi_pr.o 00:30:28.034 CC lib/ftl/ftl_layout.o 00:30:28.305 LIB libspdk_blobfs.a 00:30:28.306 CC lib/ftl/ftl_debug.o 00:30:28.306 SO libspdk_blobfs.so.11.0 00:30:28.306 CC lib/nbd/nbd_rpc.o 00:30:28.306 CC lib/scsi/scsi_rpc.o 00:30:28.306 SYMLINK libspdk_blobfs.so 00:30:28.306 LIB libspdk_lvol.a 00:30:28.306 CC lib/nvmf/ctrlr_discovery.o 00:30:28.306 SO libspdk_lvol.so.11.0 00:30:28.563 SYMLINK libspdk_lvol.so 00:30:28.563 CC lib/ublk/ublk_rpc.o 00:30:28.563 CC lib/ftl/ftl_io.o 00:30:28.563 LIB libspdk_nbd.a 00:30:28.563 CC lib/ftl/ftl_sb.o 00:30:28.563 CC lib/nvmf/ctrlr_bdev.o 00:30:28.563 SO libspdk_nbd.so.7.0 00:30:28.563 CC lib/ftl/ftl_l2p.o 00:30:28.563 SYMLINK libspdk_nbd.so 00:30:28.563 CC lib/ftl/ftl_l2p_flat.o 00:30:28.563 CC lib/ftl/ftl_nv_cache.o 00:30:28.563 CC lib/scsi/task.o 00:30:28.563 LIB libspdk_ublk.a 00:30:28.821 SO libspdk_ublk.so.3.0 00:30:28.821 CC lib/nvmf/subsystem.o 00:30:28.821 SYMLINK libspdk_ublk.so 00:30:28.821 CC lib/ftl/ftl_band.o 00:30:28.821 CC lib/nvmf/nvmf.o 00:30:28.821 CC lib/nvmf/nvmf_rpc.o 00:30:28.821 CC lib/ftl/ftl_band_ops.o 00:30:28.821 LIB libspdk_scsi.a 00:30:29.080 CC lib/nvmf/transport.o 00:30:29.080 SO libspdk_scsi.so.9.0 00:30:29.080 SYMLINK libspdk_scsi.so 00:30:29.080 CC lib/ftl/ftl_writer.o 00:30:29.338 CC lib/nvmf/tcp.o 00:30:29.338 CC lib/nvmf/stubs.o 00:30:29.338 CC lib/nvmf/mdns_server.o 00:30:29.338 CC lib/nvmf/rdma.o 00:30:29.904 CC lib/nvmf/auth.o 00:30:29.905 CC lib/ftl/ftl_rq.o 00:30:29.905 CC lib/ftl/ftl_reloc.o 00:30:29.905 CC lib/ftl/ftl_l2p_cache.o 00:30:30.163 CC lib/iscsi/conn.o 00:30:30.163 CC lib/vhost/vhost.o 00:30:30.163 CC lib/ftl/ftl_p2l.o 00:30:30.421 CC lib/ftl/ftl_p2l_log.o 00:30:30.421 CC 
lib/ftl/mngt/ftl_mngt.o 00:30:30.421 CC lib/iscsi/init_grp.o 00:30:30.679 CC lib/iscsi/iscsi.o 00:30:30.679 CC lib/iscsi/param.o 00:30:30.679 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:30:30.679 CC lib/vhost/vhost_rpc.o 00:30:30.679 CC lib/iscsi/portal_grp.o 00:30:30.937 CC lib/iscsi/tgt_node.o 00:30:30.937 CC lib/iscsi/iscsi_subsystem.o 00:30:30.937 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:30:31.195 CC lib/iscsi/iscsi_rpc.o 00:30:31.195 CC lib/ftl/mngt/ftl_mngt_startup.o 00:30:31.195 CC lib/iscsi/task.o 00:30:31.195 CC lib/vhost/vhost_scsi.o 00:30:31.453 CC lib/ftl/mngt/ftl_mngt_md.o 00:30:31.453 CC lib/ftl/mngt/ftl_mngt_misc.o 00:30:31.453 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:30:31.453 CC lib/vhost/vhost_blk.o 00:30:31.453 CC lib/vhost/rte_vhost_user.o 00:30:31.453 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:30:31.453 CC lib/ftl/mngt/ftl_mngt_band.o 00:30:31.711 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:30:31.711 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:30:31.711 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:30:31.711 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:30:31.969 CC lib/ftl/utils/ftl_conf.o 00:30:31.969 CC lib/ftl/utils/ftl_md.o 00:30:31.969 CC lib/ftl/utils/ftl_mempool.o 00:30:31.969 CC lib/ftl/utils/ftl_bitmap.o 00:30:31.969 CC lib/ftl/utils/ftl_property.o 00:30:32.226 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:30:32.226 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:30:32.226 LIB libspdk_nvmf.a 00:30:32.226 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:30:32.226 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:30:32.484 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:30:32.484 SO libspdk_nvmf.so.20.0 00:30:32.484 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:30:32.484 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:30:32.484 CC lib/ftl/upgrade/ftl_sb_v3.o 00:30:32.484 LIB libspdk_iscsi.a 00:30:32.484 CC lib/ftl/upgrade/ftl_sb_v5.o 00:30:32.484 CC lib/ftl/nvc/ftl_nvc_dev.o 00:30:32.484 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:30:32.742 SO libspdk_iscsi.so.8.0 00:30:32.742 SYMLINK libspdk_nvmf.so 00:30:32.742 CC 
lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:30:32.742 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:30:32.742 CC lib/ftl/base/ftl_base_dev.o 00:30:32.742 CC lib/ftl/base/ftl_base_bdev.o 00:30:32.742 CC lib/ftl/ftl_trace.o 00:30:32.742 LIB libspdk_vhost.a 00:30:32.742 SO libspdk_vhost.so.8.0 00:30:32.742 SYMLINK libspdk_iscsi.so 00:30:33.001 SYMLINK libspdk_vhost.so 00:30:33.001 LIB libspdk_ftl.a 00:30:33.260 SO libspdk_ftl.so.9.0 00:30:33.518 SYMLINK libspdk_ftl.so 00:30:34.084 CC module/env_dpdk/env_dpdk_rpc.o 00:30:34.084 CC module/scheduler/dynamic/scheduler_dynamic.o 00:30:34.084 CC module/accel/iaa/accel_iaa.o 00:30:34.084 CC module/sock/posix/posix.o 00:30:34.084 CC module/accel/ioat/accel_ioat.o 00:30:34.084 CC module/accel/dsa/accel_dsa.o 00:30:34.084 CC module/accel/error/accel_error.o 00:30:34.084 CC module/keyring/file/keyring.o 00:30:34.084 CC module/fsdev/aio/fsdev_aio.o 00:30:34.084 CC module/blob/bdev/blob_bdev.o 00:30:34.084 LIB libspdk_env_dpdk_rpc.a 00:30:34.084 SO libspdk_env_dpdk_rpc.so.6.0 00:30:34.084 SYMLINK libspdk_env_dpdk_rpc.so 00:30:34.084 CC module/accel/ioat/accel_ioat_rpc.o 00:30:34.084 CC module/keyring/file/keyring_rpc.o 00:30:34.343 LIB libspdk_scheduler_dynamic.a 00:30:34.343 CC module/fsdev/aio/fsdev_aio_rpc.o 00:30:34.343 CC module/accel/iaa/accel_iaa_rpc.o 00:30:34.343 SO libspdk_scheduler_dynamic.so.4.0 00:30:34.343 CC module/accel/error/accel_error_rpc.o 00:30:34.343 LIB libspdk_accel_ioat.a 00:30:34.343 SYMLINK libspdk_scheduler_dynamic.so 00:30:34.343 LIB libspdk_keyring_file.a 00:30:34.343 SO libspdk_accel_ioat.so.6.0 00:30:34.343 LIB libspdk_blob_bdev.a 00:30:34.343 CC module/accel/dsa/accel_dsa_rpc.o 00:30:34.343 SO libspdk_keyring_file.so.2.0 00:30:34.343 SO libspdk_blob_bdev.so.12.0 00:30:34.343 LIB libspdk_accel_iaa.a 00:30:34.343 CC module/fsdev/aio/linux_aio_mgr.o 00:30:34.343 SO libspdk_accel_iaa.so.3.0 00:30:34.343 SYMLINK libspdk_accel_ioat.so 00:30:34.343 LIB libspdk_accel_error.a 00:30:34.343 SYMLINK 
libspdk_keyring_file.so 00:30:34.656 SYMLINK libspdk_blob_bdev.so 00:30:34.656 SO libspdk_accel_error.so.2.0 00:30:34.656 SYMLINK libspdk_accel_iaa.so 00:30:34.656 LIB libspdk_accel_dsa.a 00:30:34.656 SYMLINK libspdk_accel_error.so 00:30:34.656 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:30:34.656 SO libspdk_accel_dsa.so.5.0 00:30:34.656 SYMLINK libspdk_accel_dsa.so 00:30:34.656 CC module/keyring/linux/keyring.o 00:30:34.656 CC module/scheduler/gscheduler/gscheduler.o 00:30:34.656 LIB libspdk_scheduler_dpdk_governor.a 00:30:34.929 SO libspdk_scheduler_dpdk_governor.so.4.0 00:30:34.929 CC module/bdev/error/vbdev_error.o 00:30:34.929 CC module/bdev/delay/vbdev_delay.o 00:30:34.929 CC module/keyring/linux/keyring_rpc.o 00:30:34.929 CC module/bdev/gpt/gpt.o 00:30:34.929 CC module/bdev/lvol/vbdev_lvol.o 00:30:34.929 CC module/blobfs/bdev/blobfs_bdev.o 00:30:34.929 SYMLINK libspdk_scheduler_dpdk_governor.so 00:30:34.929 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:30:34.929 LIB libspdk_scheduler_gscheduler.a 00:30:34.929 SO libspdk_scheduler_gscheduler.so.4.0 00:30:34.929 LIB libspdk_fsdev_aio.a 00:30:34.929 LIB libspdk_sock_posix.a 00:30:34.929 SYMLINK libspdk_scheduler_gscheduler.so 00:30:34.929 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:30:34.929 SO libspdk_fsdev_aio.so.1.0 00:30:34.929 LIB libspdk_keyring_linux.a 00:30:34.929 SO libspdk_sock_posix.so.6.0 00:30:34.929 SO libspdk_keyring_linux.so.1.0 00:30:35.187 SYMLINK libspdk_fsdev_aio.so 00:30:35.187 CC module/bdev/delay/vbdev_delay_rpc.o 00:30:35.187 CC module/bdev/gpt/vbdev_gpt.o 00:30:35.187 SYMLINK libspdk_sock_posix.so 00:30:35.187 SYMLINK libspdk_keyring_linux.so 00:30:35.187 CC module/bdev/error/vbdev_error_rpc.o 00:30:35.187 LIB libspdk_blobfs_bdev.a 00:30:35.187 SO libspdk_blobfs_bdev.so.6.0 00:30:35.187 CC module/bdev/malloc/bdev_malloc.o 00:30:35.187 CC module/bdev/null/bdev_null.o 00:30:35.187 CC module/bdev/malloc/bdev_malloc_rpc.o 00:30:35.187 SYMLINK libspdk_blobfs_bdev.so 00:30:35.187 LIB 
libspdk_bdev_delay.a 00:30:35.187 LIB libspdk_bdev_error.a 00:30:35.446 CC module/bdev/nvme/bdev_nvme.o 00:30:35.446 SO libspdk_bdev_delay.so.6.0 00:30:35.446 SO libspdk_bdev_error.so.6.0 00:30:35.446 CC module/bdev/nvme/bdev_nvme_rpc.o 00:30:35.446 SYMLINK libspdk_bdev_delay.so 00:30:35.446 CC module/bdev/null/bdev_null_rpc.o 00:30:35.446 LIB libspdk_bdev_gpt.a 00:30:35.446 SYMLINK libspdk_bdev_error.so 00:30:35.446 SO libspdk_bdev_gpt.so.6.0 00:30:35.446 CC module/bdev/passthru/vbdev_passthru.o 00:30:35.446 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:30:35.446 SYMLINK libspdk_bdev_gpt.so 00:30:35.446 LIB libspdk_bdev_lvol.a 00:30:35.446 SO libspdk_bdev_lvol.so.6.0 00:30:35.703 LIB libspdk_bdev_null.a 00:30:35.703 CC module/bdev/nvme/nvme_rpc.o 00:30:35.703 CC module/bdev/raid/bdev_raid.o 00:30:35.703 SO libspdk_bdev_null.so.6.0 00:30:35.703 SYMLINK libspdk_bdev_lvol.so 00:30:35.703 SYMLINK libspdk_bdev_null.so 00:30:35.703 CC module/bdev/split/vbdev_split.o 00:30:35.703 CC module/bdev/split/vbdev_split_rpc.o 00:30:35.703 LIB libspdk_bdev_malloc.a 00:30:35.703 SO libspdk_bdev_malloc.so.6.0 00:30:35.961 CC module/bdev/zone_block/vbdev_zone_block.o 00:30:35.961 CC module/bdev/aio/bdev_aio.o 00:30:35.961 LIB libspdk_bdev_passthru.a 00:30:35.961 SYMLINK libspdk_bdev_malloc.so 00:30:35.961 SO libspdk_bdev_passthru.so.6.0 00:30:35.961 SYMLINK libspdk_bdev_passthru.so 00:30:35.961 LIB libspdk_bdev_split.a 00:30:35.961 CC module/bdev/aio/bdev_aio_rpc.o 00:30:35.961 SO libspdk_bdev_split.so.6.0 00:30:35.961 CC module/bdev/ftl/bdev_ftl.o 00:30:35.961 CC module/bdev/iscsi/bdev_iscsi.o 00:30:36.219 CC module/bdev/virtio/bdev_virtio_scsi.o 00:30:36.219 SYMLINK libspdk_bdev_split.so 00:30:36.219 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:30:36.219 CC module/bdev/ftl/bdev_ftl_rpc.o 00:30:36.219 CC module/bdev/nvme/bdev_mdns_client.o 00:30:36.219 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:30:36.219 LIB libspdk_bdev_aio.a 00:30:36.219 CC module/bdev/nvme/vbdev_opal.o 
00:30:36.219 SO libspdk_bdev_aio.so.6.0 00:30:36.477 CC module/bdev/virtio/bdev_virtio_blk.o 00:30:36.477 SYMLINK libspdk_bdev_aio.so 00:30:36.477 CC module/bdev/virtio/bdev_virtio_rpc.o 00:30:36.477 CC module/bdev/nvme/vbdev_opal_rpc.o 00:30:36.477 LIB libspdk_bdev_ftl.a 00:30:36.477 LIB libspdk_bdev_zone_block.a 00:30:36.477 SO libspdk_bdev_ftl.so.6.0 00:30:36.477 LIB libspdk_bdev_iscsi.a 00:30:36.477 SO libspdk_bdev_zone_block.so.6.0 00:30:36.477 SO libspdk_bdev_iscsi.so.6.0 00:30:36.477 SYMLINK libspdk_bdev_ftl.so 00:30:36.735 SYMLINK libspdk_bdev_zone_block.so 00:30:36.735 CC module/bdev/raid/bdev_raid_rpc.o 00:30:36.735 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:30:36.735 CC module/bdev/raid/bdev_raid_sb.o 00:30:36.735 CC module/bdev/raid/raid0.o 00:30:36.735 SYMLINK libspdk_bdev_iscsi.so 00:30:36.735 CC module/bdev/raid/raid1.o 00:30:36.735 CC module/bdev/raid/concat.o 00:30:36.735 CC module/bdev/raid/raid5f.o 00:30:36.735 LIB libspdk_bdev_virtio.a 00:30:36.735 SO libspdk_bdev_virtio.so.6.0 00:30:36.735 SYMLINK libspdk_bdev_virtio.so 00:30:37.303 LIB libspdk_bdev_raid.a 00:30:37.303 SO libspdk_bdev_raid.so.6.0 00:30:37.562 SYMLINK libspdk_bdev_raid.so 00:30:38.496 LIB libspdk_bdev_nvme.a 00:30:38.753 SO libspdk_bdev_nvme.so.7.1 00:30:38.753 SYMLINK libspdk_bdev_nvme.so 00:30:39.320 CC module/event/subsystems/iobuf/iobuf.o 00:30:39.320 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:30:39.320 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:30:39.320 CC module/event/subsystems/scheduler/scheduler.o 00:30:39.320 CC module/event/subsystems/keyring/keyring.o 00:30:39.320 CC module/event/subsystems/sock/sock.o 00:30:39.320 CC module/event/subsystems/fsdev/fsdev.o 00:30:39.320 CC module/event/subsystems/vmd/vmd.o 00:30:39.320 CC module/event/subsystems/vmd/vmd_rpc.o 00:30:39.578 LIB libspdk_event_scheduler.a 00:30:39.578 LIB libspdk_event_keyring.a 00:30:39.578 LIB libspdk_event_vhost_blk.a 00:30:39.578 LIB libspdk_event_fsdev.a 00:30:39.578 LIB 
libspdk_event_sock.a 00:30:39.578 LIB libspdk_event_vmd.a 00:30:39.578 SO libspdk_event_scheduler.so.4.0 00:30:39.578 LIB libspdk_event_iobuf.a 00:30:39.578 SO libspdk_event_keyring.so.1.0 00:30:39.578 SO libspdk_event_vhost_blk.so.3.0 00:30:39.578 SO libspdk_event_fsdev.so.1.0 00:30:39.578 SO libspdk_event_sock.so.5.0 00:30:39.578 SO libspdk_event_vmd.so.6.0 00:30:39.578 SO libspdk_event_iobuf.so.3.0 00:30:39.578 SYMLINK libspdk_event_scheduler.so 00:30:39.578 SYMLINK libspdk_event_keyring.so 00:30:39.578 SYMLINK libspdk_event_vhost_blk.so 00:30:39.578 SYMLINK libspdk_event_fsdev.so 00:30:39.578 SYMLINK libspdk_event_sock.so 00:30:39.578 SYMLINK libspdk_event_vmd.so 00:30:39.578 SYMLINK libspdk_event_iobuf.so 00:30:39.836 CC module/event/subsystems/accel/accel.o 00:30:40.093 LIB libspdk_event_accel.a 00:30:40.094 SO libspdk_event_accel.so.6.0 00:30:40.094 SYMLINK libspdk_event_accel.so 00:30:40.659 CC module/event/subsystems/bdev/bdev.o 00:30:40.917 LIB libspdk_event_bdev.a 00:30:40.918 SO libspdk_event_bdev.so.6.0 00:30:40.918 SYMLINK libspdk_event_bdev.so 00:30:41.176 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:30:41.176 CC module/event/subsystems/nbd/nbd.o 00:30:41.176 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:30:41.176 CC module/event/subsystems/scsi/scsi.o 00:30:41.176 CC module/event/subsystems/ublk/ublk.o 00:30:41.434 LIB libspdk_event_nbd.a 00:30:41.434 LIB libspdk_event_ublk.a 00:30:41.434 LIB libspdk_event_scsi.a 00:30:41.434 SO libspdk_event_nbd.so.6.0 00:30:41.434 SO libspdk_event_ublk.so.3.0 00:30:41.434 SO libspdk_event_scsi.so.6.0 00:30:41.434 SYMLINK libspdk_event_nbd.so 00:30:41.434 SYMLINK libspdk_event_ublk.so 00:30:41.434 LIB libspdk_event_nvmf.a 00:30:41.434 SYMLINK libspdk_event_scsi.so 00:30:41.434 SO libspdk_event_nvmf.so.6.0 00:30:41.692 SYMLINK libspdk_event_nvmf.so 00:30:41.692 CC module/event/subsystems/iscsi/iscsi.o 00:30:41.692 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:30:41.960 LIB libspdk_event_vhost_scsi.a 
00:30:41.960 LIB libspdk_event_iscsi.a 00:30:41.960 SO libspdk_event_vhost_scsi.so.3.0 00:30:41.960 SO libspdk_event_iscsi.so.6.0 00:30:41.960 SYMLINK libspdk_event_vhost_scsi.so 00:30:41.960 SYMLINK libspdk_event_iscsi.so 00:30:42.217 SO libspdk.so.6.0 00:30:42.218 SYMLINK libspdk.so 00:30:42.474 TEST_HEADER include/spdk/accel.h 00:30:42.474 CC app/trace_record/trace_record.o 00:30:42.474 TEST_HEADER include/spdk/accel_module.h 00:30:42.474 CXX app/trace/trace.o 00:30:42.474 TEST_HEADER include/spdk/assert.h 00:30:42.474 TEST_HEADER include/spdk/barrier.h 00:30:42.474 TEST_HEADER include/spdk/base64.h 00:30:42.474 TEST_HEADER include/spdk/bdev.h 00:30:42.474 TEST_HEADER include/spdk/bdev_module.h 00:30:42.474 TEST_HEADER include/spdk/bdev_zone.h 00:30:42.474 TEST_HEADER include/spdk/bit_array.h 00:30:42.474 TEST_HEADER include/spdk/bit_pool.h 00:30:42.474 TEST_HEADER include/spdk/blob_bdev.h 00:30:42.474 TEST_HEADER include/spdk/blobfs_bdev.h 00:30:42.474 TEST_HEADER include/spdk/blobfs.h 00:30:42.474 TEST_HEADER include/spdk/blob.h 00:30:42.474 TEST_HEADER include/spdk/conf.h 00:30:42.474 TEST_HEADER include/spdk/config.h 00:30:42.474 TEST_HEADER include/spdk/cpuset.h 00:30:42.474 TEST_HEADER include/spdk/crc16.h 00:30:42.474 TEST_HEADER include/spdk/crc32.h 00:30:42.474 TEST_HEADER include/spdk/crc64.h 00:30:42.474 TEST_HEADER include/spdk/dif.h 00:30:42.474 CC examples/interrupt_tgt/interrupt_tgt.o 00:30:42.474 TEST_HEADER include/spdk/dma.h 00:30:42.474 TEST_HEADER include/spdk/endian.h 00:30:42.474 TEST_HEADER include/spdk/env_dpdk.h 00:30:42.474 TEST_HEADER include/spdk/env.h 00:30:42.474 TEST_HEADER include/spdk/event.h 00:30:42.474 TEST_HEADER include/spdk/fd_group.h 00:30:42.474 TEST_HEADER include/spdk/fd.h 00:30:42.475 TEST_HEADER include/spdk/file.h 00:30:42.475 TEST_HEADER include/spdk/fsdev.h 00:30:42.475 TEST_HEADER include/spdk/fsdev_module.h 00:30:42.475 TEST_HEADER include/spdk/ftl.h 00:30:42.475 CC examples/util/zipf/zipf.o 00:30:42.475 
TEST_HEADER include/spdk/fuse_dispatcher.h 00:30:42.475 TEST_HEADER include/spdk/gpt_spec.h 00:30:42.475 CC examples/ioat/perf/perf.o 00:30:42.475 TEST_HEADER include/spdk/hexlify.h 00:30:42.475 TEST_HEADER include/spdk/histogram_data.h 00:30:42.475 TEST_HEADER include/spdk/idxd.h 00:30:42.475 TEST_HEADER include/spdk/idxd_spec.h 00:30:42.475 TEST_HEADER include/spdk/init.h 00:30:42.475 TEST_HEADER include/spdk/ioat.h 00:30:42.475 TEST_HEADER include/spdk/ioat_spec.h 00:30:42.475 TEST_HEADER include/spdk/iscsi_spec.h 00:30:42.475 TEST_HEADER include/spdk/json.h 00:30:42.475 TEST_HEADER include/spdk/jsonrpc.h 00:30:42.475 CC test/thread/poller_perf/poller_perf.o 00:30:42.475 TEST_HEADER include/spdk/keyring.h 00:30:42.475 TEST_HEADER include/spdk/keyring_module.h 00:30:42.475 TEST_HEADER include/spdk/likely.h 00:30:42.732 TEST_HEADER include/spdk/log.h 00:30:42.732 TEST_HEADER include/spdk/lvol.h 00:30:42.732 TEST_HEADER include/spdk/md5.h 00:30:42.732 TEST_HEADER include/spdk/memory.h 00:30:42.732 TEST_HEADER include/spdk/mmio.h 00:30:42.732 TEST_HEADER include/spdk/nbd.h 00:30:42.732 CC test/dma/test_dma/test_dma.o 00:30:42.732 TEST_HEADER include/spdk/net.h 00:30:42.732 TEST_HEADER include/spdk/notify.h 00:30:42.732 TEST_HEADER include/spdk/nvme.h 00:30:42.732 TEST_HEADER include/spdk/nvme_intel.h 00:30:42.732 TEST_HEADER include/spdk/nvme_ocssd.h 00:30:42.732 CC test/app/bdev_svc/bdev_svc.o 00:30:42.732 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:30:42.732 TEST_HEADER include/spdk/nvme_spec.h 00:30:42.732 TEST_HEADER include/spdk/nvme_zns.h 00:30:42.732 TEST_HEADER include/spdk/nvmf_cmd.h 00:30:42.732 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:30:42.732 TEST_HEADER include/spdk/nvmf.h 00:30:42.732 TEST_HEADER include/spdk/nvmf_spec.h 00:30:42.732 TEST_HEADER include/spdk/nvmf_transport.h 00:30:42.732 TEST_HEADER include/spdk/opal.h 00:30:42.732 TEST_HEADER include/spdk/opal_spec.h 00:30:42.732 TEST_HEADER include/spdk/pci_ids.h 00:30:42.732 TEST_HEADER 
include/spdk/pipe.h 00:30:42.732 TEST_HEADER include/spdk/queue.h 00:30:42.732 CC test/env/mem_callbacks/mem_callbacks.o 00:30:42.732 TEST_HEADER include/spdk/reduce.h 00:30:42.732 TEST_HEADER include/spdk/rpc.h 00:30:42.732 TEST_HEADER include/spdk/scheduler.h 00:30:42.732 TEST_HEADER include/spdk/scsi.h 00:30:42.732 TEST_HEADER include/spdk/scsi_spec.h 00:30:42.732 TEST_HEADER include/spdk/sock.h 00:30:42.732 TEST_HEADER include/spdk/stdinc.h 00:30:42.732 TEST_HEADER include/spdk/string.h 00:30:42.732 TEST_HEADER include/spdk/thread.h 00:30:42.732 TEST_HEADER include/spdk/trace.h 00:30:42.732 TEST_HEADER include/spdk/trace_parser.h 00:30:42.732 TEST_HEADER include/spdk/tree.h 00:30:42.732 TEST_HEADER include/spdk/ublk.h 00:30:42.732 TEST_HEADER include/spdk/util.h 00:30:42.732 TEST_HEADER include/spdk/uuid.h 00:30:42.732 TEST_HEADER include/spdk/version.h 00:30:42.732 TEST_HEADER include/spdk/vfio_user_pci.h 00:30:42.732 TEST_HEADER include/spdk/vfio_user_spec.h 00:30:42.732 TEST_HEADER include/spdk/vhost.h 00:30:42.732 TEST_HEADER include/spdk/vmd.h 00:30:42.732 TEST_HEADER include/spdk/xor.h 00:30:42.732 TEST_HEADER include/spdk/zipf.h 00:30:42.732 CXX test/cpp_headers/accel.o 00:30:42.732 LINK zipf 00:30:42.732 LINK interrupt_tgt 00:30:42.732 LINK poller_perf 00:30:42.732 LINK spdk_trace_record 00:30:42.732 LINK ioat_perf 00:30:42.989 LINK bdev_svc 00:30:42.989 CXX test/cpp_headers/accel_module.o 00:30:42.989 LINK spdk_trace 00:30:42.989 CC test/app/histogram_perf/histogram_perf.o 00:30:42.989 CC examples/ioat/verify/verify.o 00:30:42.989 CC test/app/jsoncat/jsoncat.o 00:30:42.989 CC test/rpc_client/rpc_client_test.o 00:30:43.246 CXX test/cpp_headers/assert.o 00:30:43.246 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:30:43.246 CC test/app/stub/stub.o 00:30:43.246 LINK histogram_perf 00:30:43.246 LINK jsoncat 00:30:43.246 LINK test_dma 00:30:43.246 CXX test/cpp_headers/barrier.o 00:30:43.246 LINK rpc_client_test 00:30:43.246 CC app/nvmf_tgt/nvmf_main.o 00:30:43.246 
LINK verify 00:30:43.246 LINK mem_callbacks 00:30:43.504 LINK stub 00:30:43.504 CC test/env/vtophys/vtophys.o 00:30:43.504 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:30:43.504 LINK nvmf_tgt 00:30:43.504 CXX test/cpp_headers/base64.o 00:30:43.504 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:30:43.504 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:30:43.761 LINK nvme_fuzz 00:30:43.761 CC app/iscsi_tgt/iscsi_tgt.o 00:30:43.761 LINK vtophys 00:30:43.761 LINK env_dpdk_post_init 00:30:43.761 CC examples/sock/hello_world/hello_sock.o 00:30:43.761 CXX test/cpp_headers/bdev.o 00:30:43.761 CC examples/thread/thread/thread_ex.o 00:30:43.761 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:30:44.019 CC app/spdk_tgt/spdk_tgt.o 00:30:44.019 LINK iscsi_tgt 00:30:44.019 CC test/env/memory/memory_ut.o 00:30:44.019 CXX test/cpp_headers/bdev_module.o 00:30:44.019 LINK hello_sock 00:30:44.019 LINK thread 00:30:44.019 CC test/event/event_perf/event_perf.o 00:30:44.019 CC test/nvme/aer/aer.o 00:30:44.277 LINK spdk_tgt 00:30:44.277 CC test/nvme/reset/reset.o 00:30:44.277 CXX test/cpp_headers/bdev_zone.o 00:30:44.277 LINK event_perf 00:30:44.277 CC test/nvme/sgl/sgl.o 00:30:44.277 LINK vhost_fuzz 00:30:44.536 CC app/spdk_lspci/spdk_lspci.o 00:30:44.536 CC examples/vmd/lsvmd/lsvmd.o 00:30:44.536 CXX test/cpp_headers/bit_array.o 00:30:44.536 LINK aer 00:30:44.536 CC test/event/reactor/reactor.o 00:30:44.536 LINK reset 00:30:44.536 CC examples/vmd/led/led.o 00:30:44.536 LINK spdk_lspci 00:30:44.536 LINK lsvmd 00:30:44.536 CXX test/cpp_headers/bit_pool.o 00:30:44.536 LINK reactor 00:30:44.795 LINK sgl 00:30:44.795 CXX test/cpp_headers/blob_bdev.o 00:30:44.795 CXX test/cpp_headers/blobfs_bdev.o 00:30:44.795 LINK led 00:30:44.795 CXX test/cpp_headers/blobfs.o 00:30:44.795 CC app/spdk_nvme_perf/perf.o 00:30:45.053 CC test/event/reactor_perf/reactor_perf.o 00:30:45.053 CXX test/cpp_headers/blob.o 00:30:45.053 CC test/nvme/e2edp/nvme_dp.o 00:30:45.053 CC app/spdk_nvme_identify/identify.o 
00:30:45.053 CC app/spdk_nvme_discover/discovery_aer.o 00:30:45.053 CC test/env/pci/pci_ut.o 00:30:45.053 CC examples/idxd/perf/perf.o 00:30:45.053 LINK reactor_perf 00:30:45.053 CXX test/cpp_headers/conf.o 00:30:45.310 LINK spdk_nvme_discover 00:30:45.310 LINK nvme_dp 00:30:45.310 CXX test/cpp_headers/config.o 00:30:45.310 CXX test/cpp_headers/cpuset.o 00:30:45.310 CC test/event/app_repeat/app_repeat.o 00:30:45.568 LINK memory_ut 00:30:45.568 CXX test/cpp_headers/crc16.o 00:30:45.568 LINK idxd_perf 00:30:45.568 CC test/nvme/overhead/overhead.o 00:30:45.568 CC test/event/scheduler/scheduler.o 00:30:45.568 LINK app_repeat 00:30:45.568 LINK pci_ut 00:30:45.568 CXX test/cpp_headers/crc32.o 00:30:45.568 CXX test/cpp_headers/crc64.o 00:30:45.826 LINK scheduler 00:30:45.826 CXX test/cpp_headers/dif.o 00:30:45.826 CXX test/cpp_headers/dma.o 00:30:45.826 LINK overhead 00:30:45.826 CC examples/fsdev/hello_world/hello_fsdev.o 00:30:45.826 LINK spdk_nvme_perf 00:30:45.826 LINK iscsi_fuzz 00:30:45.826 CC examples/accel/perf/accel_perf.o 00:30:46.084 LINK spdk_nvme_identify 00:30:46.084 CXX test/cpp_headers/endian.o 00:30:46.084 CXX test/cpp_headers/env_dpdk.o 00:30:46.084 CXX test/cpp_headers/env.o 00:30:46.084 CC test/accel/dif/dif.o 00:30:46.084 CC test/nvme/err_injection/err_injection.o 00:30:46.084 CC test/nvme/startup/startup.o 00:30:46.084 CXX test/cpp_headers/event.o 00:30:46.342 LINK hello_fsdev 00:30:46.342 CXX test/cpp_headers/fd_group.o 00:30:46.342 CC app/spdk_top/spdk_top.o 00:30:46.342 LINK err_injection 00:30:46.342 LINK startup 00:30:46.342 CXX test/cpp_headers/fd.o 00:30:46.342 CXX test/cpp_headers/file.o 00:30:46.600 CC examples/nvme/hello_world/hello_world.o 00:30:46.600 CC examples/blob/hello_world/hello_blob.o 00:30:46.600 LINK accel_perf 00:30:46.600 CXX test/cpp_headers/fsdev.o 00:30:46.600 CC test/nvme/reserve/reserve.o 00:30:46.600 CC test/blobfs/mkfs/mkfs.o 00:30:46.600 CC examples/blob/cli/blobcli.o 00:30:46.857 CC examples/nvme/reconnect/reconnect.o 
00:30:46.857 CXX test/cpp_headers/fsdev_module.o 00:30:46.857 LINK hello_world 00:30:46.857 CC examples/nvme/nvme_manage/nvme_manage.o 00:30:46.857 LINK hello_blob 00:30:46.857 LINK reserve 00:30:46.857 LINK mkfs 00:30:46.857 CXX test/cpp_headers/ftl.o 00:30:46.857 CXX test/cpp_headers/fuse_dispatcher.o 00:30:47.115 LINK dif 00:30:47.115 CXX test/cpp_headers/gpt_spec.o 00:30:47.115 CC test/nvme/simple_copy/simple_copy.o 00:30:47.115 LINK reconnect 00:30:47.115 CC examples/nvme/hotplug/hotplug.o 00:30:47.115 CC examples/nvme/arbitration/arbitration.o 00:30:47.115 CXX test/cpp_headers/hexlify.o 00:30:47.373 CC examples/nvme/cmb_copy/cmb_copy.o 00:30:47.373 LINK blobcli 00:30:47.373 CC examples/nvme/abort/abort.o 00:30:47.373 LINK simple_copy 00:30:47.373 CXX test/cpp_headers/histogram_data.o 00:30:47.373 LINK spdk_top 00:30:47.373 CC test/nvme/connect_stress/connect_stress.o 00:30:47.373 LINK cmb_copy 00:30:47.373 LINK nvme_manage 00:30:47.373 LINK hotplug 00:30:47.632 CXX test/cpp_headers/idxd.o 00:30:47.632 LINK arbitration 00:30:47.632 LINK connect_stress 00:30:47.632 CC app/spdk_dd/spdk_dd.o 00:30:47.632 CC test/nvme/boot_partition/boot_partition.o 00:30:47.632 CXX test/cpp_headers/idxd_spec.o 00:30:47.632 CC app/vhost/vhost.o 00:30:47.632 CC test/nvme/compliance/nvme_compliance.o 00:30:47.632 CC test/nvme/fused_ordering/fused_ordering.o 00:30:47.890 LINK abort 00:30:47.890 CC app/fio/nvme/fio_plugin.o 00:30:47.890 CC app/fio/bdev/fio_plugin.o 00:30:47.890 CXX test/cpp_headers/init.o 00:30:47.890 LINK boot_partition 00:30:47.890 LINK vhost 00:30:47.890 LINK fused_ordering 00:30:48.149 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:30:48.149 CXX test/cpp_headers/ioat.o 00:30:48.149 CC test/lvol/esnap/esnap.o 00:30:48.149 LINK spdk_dd 00:30:48.149 LINK nvme_compliance 00:30:48.149 CXX test/cpp_headers/ioat_spec.o 00:30:48.149 LINK pmr_persistence 00:30:48.407 CC test/bdev/bdevio/bdevio.o 00:30:48.407 CC examples/bdev/hello_world/hello_bdev.o 00:30:48.407 CXX 
test/cpp_headers/iscsi_spec.o 00:30:48.407 CXX test/cpp_headers/json.o 00:30:48.407 CC examples/bdev/bdevperf/bdevperf.o 00:30:48.407 CC test/nvme/doorbell_aers/doorbell_aers.o 00:30:48.407 CXX test/cpp_headers/jsonrpc.o 00:30:48.709 LINK spdk_bdev 00:30:48.709 CXX test/cpp_headers/keyring.o 00:30:48.709 LINK spdk_nvme 00:30:48.709 CXX test/cpp_headers/keyring_module.o 00:30:48.709 CC test/nvme/fdp/fdp.o 00:30:48.709 LINK hello_bdev 00:30:48.709 CXX test/cpp_headers/likely.o 00:30:48.709 LINK doorbell_aers 00:30:48.709 CXX test/cpp_headers/log.o 00:30:48.709 CC test/nvme/cuse/cuse.o 00:30:48.974 LINK bdevio 00:30:48.974 CXX test/cpp_headers/lvol.o 00:30:48.974 CXX test/cpp_headers/md5.o 00:30:48.974 CXX test/cpp_headers/memory.o 00:30:48.974 CXX test/cpp_headers/mmio.o 00:30:48.974 CXX test/cpp_headers/nbd.o 00:30:48.974 CXX test/cpp_headers/net.o 00:30:48.974 CXX test/cpp_headers/notify.o 00:30:48.974 CXX test/cpp_headers/nvme.o 00:30:48.974 CXX test/cpp_headers/nvme_intel.o 00:30:48.974 CXX test/cpp_headers/nvme_ocssd.o 00:30:48.974 LINK fdp 00:30:48.974 CXX test/cpp_headers/nvme_ocssd_spec.o 00:30:49.231 CXX test/cpp_headers/nvme_spec.o 00:30:49.231 CXX test/cpp_headers/nvme_zns.o 00:30:49.231 CXX test/cpp_headers/nvmf_cmd.o 00:30:49.231 CXX test/cpp_headers/nvmf_fc_spec.o 00:30:49.231 CXX test/cpp_headers/nvmf.o 00:30:49.231 CXX test/cpp_headers/nvmf_spec.o 00:30:49.231 CXX test/cpp_headers/nvmf_transport.o 00:30:49.231 CXX test/cpp_headers/opal.o 00:30:49.489 CXX test/cpp_headers/opal_spec.o 00:30:49.489 CXX test/cpp_headers/pci_ids.o 00:30:49.489 CXX test/cpp_headers/pipe.o 00:30:49.489 CXX test/cpp_headers/queue.o 00:30:49.489 CXX test/cpp_headers/reduce.o 00:30:49.489 LINK bdevperf 00:30:49.489 CXX test/cpp_headers/rpc.o 00:30:49.489 CXX test/cpp_headers/scheduler.o 00:30:49.489 CXX test/cpp_headers/scsi.o 00:30:49.489 CXX test/cpp_headers/scsi_spec.o 00:30:49.746 CXX test/cpp_headers/sock.o 00:30:49.746 CXX test/cpp_headers/stdinc.o 00:30:49.747 CXX 
test/cpp_headers/string.o 00:30:49.747 CXX test/cpp_headers/thread.o 00:30:49.747 CXX test/cpp_headers/trace.o 00:30:49.747 CXX test/cpp_headers/trace_parser.o 00:30:49.747 CXX test/cpp_headers/tree.o 00:30:49.747 CXX test/cpp_headers/ublk.o 00:30:49.747 CXX test/cpp_headers/util.o 00:30:49.747 CXX test/cpp_headers/uuid.o 00:30:49.747 CXX test/cpp_headers/version.o 00:30:49.747 CXX test/cpp_headers/vfio_user_pci.o 00:30:50.004 CXX test/cpp_headers/vfio_user_spec.o 00:30:50.004 CC examples/nvmf/nvmf/nvmf.o 00:30:50.004 CXX test/cpp_headers/vhost.o 00:30:50.004 CXX test/cpp_headers/vmd.o 00:30:50.004 CXX test/cpp_headers/xor.o 00:30:50.004 CXX test/cpp_headers/zipf.o 00:30:50.262 LINK nvmf 00:30:50.519 LINK cuse 00:30:54.729 LINK esnap 00:30:54.988 00:30:54.988 real 1m38.172s 00:30:54.988 user 8m51.230s 00:30:54.988 sys 1m50.029s 00:30:54.988 05:21:41 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:30:54.988 05:21:41 make -- common/autotest_common.sh@10 -- $ set +x 00:30:54.988 ************************************ 00:30:54.988 END TEST make 00:30:54.988 ************************************ 00:30:54.988 05:21:41 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:30:54.988 05:21:41 -- pm/common@29 -- $ signal_monitor_resources TERM 00:30:54.988 05:21:41 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:30:54.988 05:21:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:54.988 05:21:41 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:30:54.988 05:21:41 -- pm/common@44 -- $ pid=5301 00:30:54.988 05:21:41 -- pm/common@50 -- $ kill -TERM 5301 00:30:54.988 05:21:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:54.988 05:21:41 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:30:54.988 05:21:41 -- pm/common@44 -- $ pid=5303 00:30:54.988 05:21:41 -- pm/common@50 -- $ kill -TERM 5303 00:30:54.988 05:21:41 -- 
spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:30:54.988 05:21:41 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:30:55.247 05:21:42 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:55.247 05:21:42 -- common/autotest_common.sh@1693 -- # lcov --version 00:30:55.247 05:21:42 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:55.247 05:21:42 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:55.247 05:21:42 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:55.247 05:21:42 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:55.247 05:21:42 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:55.247 05:21:42 -- scripts/common.sh@336 -- # IFS=.-: 00:30:55.247 05:21:42 -- scripts/common.sh@336 -- # read -ra ver1 00:30:55.247 05:21:42 -- scripts/common.sh@337 -- # IFS=.-: 00:30:55.247 05:21:42 -- scripts/common.sh@337 -- # read -ra ver2 00:30:55.247 05:21:42 -- scripts/common.sh@338 -- # local 'op=<' 00:30:55.247 05:21:42 -- scripts/common.sh@340 -- # ver1_l=2 00:30:55.247 05:21:42 -- scripts/common.sh@341 -- # ver2_l=1 00:30:55.247 05:21:42 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:55.247 05:21:42 -- scripts/common.sh@344 -- # case "$op" in 00:30:55.247 05:21:42 -- scripts/common.sh@345 -- # : 1 00:30:55.247 05:21:42 -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:55.247 05:21:42 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:55.247 05:21:42 -- scripts/common.sh@365 -- # decimal 1 00:30:55.247 05:21:42 -- scripts/common.sh@353 -- # local d=1 00:30:55.247 05:21:42 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:55.247 05:21:42 -- scripts/common.sh@355 -- # echo 1 00:30:55.247 05:21:42 -- scripts/common.sh@365 -- # ver1[v]=1 00:30:55.247 05:21:42 -- scripts/common.sh@366 -- # decimal 2 00:30:55.248 05:21:42 -- scripts/common.sh@353 -- # local d=2 00:30:55.248 05:21:42 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:55.248 05:21:42 -- scripts/common.sh@355 -- # echo 2 00:30:55.248 05:21:42 -- scripts/common.sh@366 -- # ver2[v]=2 00:30:55.248 05:21:42 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:55.248 05:21:42 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:55.248 05:21:42 -- scripts/common.sh@368 -- # return 0 00:30:55.248 05:21:42 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:55.248 05:21:42 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:55.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.248 --rc genhtml_branch_coverage=1 00:30:55.248 --rc genhtml_function_coverage=1 00:30:55.248 --rc genhtml_legend=1 00:30:55.248 --rc geninfo_all_blocks=1 00:30:55.248 --rc geninfo_unexecuted_blocks=1 00:30:55.248 00:30:55.248 ' 00:30:55.248 05:21:42 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:55.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.248 --rc genhtml_branch_coverage=1 00:30:55.248 --rc genhtml_function_coverage=1 00:30:55.248 --rc genhtml_legend=1 00:30:55.248 --rc geninfo_all_blocks=1 00:30:55.248 --rc geninfo_unexecuted_blocks=1 00:30:55.248 00:30:55.248 ' 00:30:55.248 05:21:42 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:55.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.248 --rc genhtml_branch_coverage=1 00:30:55.248 --rc 
genhtml_function_coverage=1 00:30:55.248 --rc genhtml_legend=1 00:30:55.248 --rc geninfo_all_blocks=1 00:30:55.248 --rc geninfo_unexecuted_blocks=1 00:30:55.248 00:30:55.248 ' 00:30:55.248 05:21:42 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:55.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.248 --rc genhtml_branch_coverage=1 00:30:55.248 --rc genhtml_function_coverage=1 00:30:55.248 --rc genhtml_legend=1 00:30:55.248 --rc geninfo_all_blocks=1 00:30:55.248 --rc geninfo_unexecuted_blocks=1 00:30:55.248 00:30:55.248 ' 00:30:55.248 05:21:42 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:55.248 05:21:42 -- nvmf/common.sh@7 -- # uname -s 00:30:55.248 05:21:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:55.248 05:21:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:55.248 05:21:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:55.248 05:21:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:55.248 05:21:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:55.248 05:21:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:55.248 05:21:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:55.248 05:21:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:55.248 05:21:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:55.248 05:21:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:55.248 05:21:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b7a55eba-b4a9-45b1-b3eb-0a1190fde04b 00:30:55.248 05:21:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=b7a55eba-b4a9-45b1-b3eb-0a1190fde04b 00:30:55.248 05:21:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:55.248 05:21:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:55.248 05:21:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:30:55.248 05:21:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:30:55.248 05:21:42 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:55.248 05:21:42 -- scripts/common.sh@15 -- # shopt -s extglob 00:30:55.248 05:21:42 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:55.248 05:21:42 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:55.248 05:21:42 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:55.248 05:21:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.248 05:21:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.248 05:21:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.248 05:21:42 -- paths/export.sh@5 -- # export PATH 00:30:55.248 05:21:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.248 05:21:42 -- nvmf/common.sh@51 -- # : 0 00:30:55.248 05:21:42 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:55.248 05:21:42 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:55.248 05:21:42 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:30:55.248 05:21:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:55.248 05:21:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:55.248 05:21:42 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:55.248 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:55.248 05:21:42 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:55.248 05:21:42 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:55.248 05:21:42 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:55.248 05:21:42 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:30:55.248 05:21:42 -- spdk/autotest.sh@32 -- # uname -s 00:30:55.248 05:21:42 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:30:55.248 05:21:42 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:30:55.248 05:21:42 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:30:55.248 05:21:42 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:30:55.248 05:21:42 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:30:55.248 05:21:42 -- spdk/autotest.sh@44 -- # modprobe nbd 00:30:55.507 05:21:42 -- spdk/autotest.sh@46 -- # type -P udevadm 00:30:55.507 05:21:42 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:30:55.507 05:21:42 -- spdk/autotest.sh@48 -- # udevadm_pid=54387 00:30:55.507 05:21:42 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:30:55.507 05:21:42 -- pm/common@17 -- # local monitor 00:30:55.507 05:21:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:30:55.507 05:21:42 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:30:55.507 05:21:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:30:55.507 05:21:42 -- pm/common@25 -- # sleep 1 00:30:55.507 05:21:42 -- pm/common@21 -- # date +%s 00:30:55.507 05:21:42 -- 
pm/common@21 -- # date +%s 00:30:55.507 05:21:42 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733721702 00:30:55.507 05:21:42 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733721702 00:30:55.507 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733721702_collect-cpu-load.pm.log 00:30:55.507 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733721702_collect-vmstat.pm.log 00:30:56.442 05:21:43 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:30:56.442 05:21:43 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:30:56.442 05:21:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:56.442 05:21:43 -- common/autotest_common.sh@10 -- # set +x 00:30:56.442 05:21:43 -- spdk/autotest.sh@59 -- # create_test_list 00:30:56.442 05:21:43 -- common/autotest_common.sh@752 -- # xtrace_disable 00:30:56.442 05:21:43 -- common/autotest_common.sh@10 -- # set +x 00:30:56.442 05:21:43 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:30:56.442 05:21:43 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:30:56.442 05:21:43 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:30:56.442 05:21:43 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:30:56.442 05:21:43 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:30:56.442 05:21:43 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:30:56.442 05:21:43 -- common/autotest_common.sh@1457 -- # uname 00:30:56.442 05:21:43 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:30:56.442 05:21:43 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:30:56.442 05:21:43 -- common/autotest_common.sh@1477 -- 
# uname 00:30:56.442 05:21:43 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:30:56.442 05:21:43 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:30:56.442 05:21:43 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:30:56.700 lcov: LCOV version 1.15 00:30:56.700 05:21:43 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:31:11.576 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:31:11.576 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:31:26.472 05:22:13 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:31:26.472 05:22:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:26.472 05:22:13 -- common/autotest_common.sh@10 -- # set +x 00:31:26.472 05:22:13 -- spdk/autotest.sh@78 -- # rm -f 00:31:26.472 05:22:13 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:27.037 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:27.037 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:31:27.037 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:31:27.037 05:22:13 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:31:27.037 05:22:13 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:31:27.037 05:22:13 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:31:27.037 05:22:13 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:31:27.037 
05:22:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:31:27.038 05:22:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:31:27.038 05:22:13 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:31:27.038 05:22:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:27.038 05:22:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:31:27.038 05:22:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:31:27.038 05:22:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:31:27.038 05:22:13 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:31:27.038 05:22:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:31:27.038 05:22:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:31:27.038 05:22:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:31:27.038 05:22:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:31:27.038 05:22:13 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:31:27.038 05:22:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:31:27.038 05:22:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:31:27.038 05:22:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:31:27.038 05:22:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:31:27.038 05:22:13 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:31:27.038 05:22:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:31:27.038 05:22:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:31:27.038 05:22:13 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:31:27.038 05:22:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:31:27.038 05:22:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:31:27.038 05:22:13 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:31:27.038 05:22:13 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:31:27.038 05:22:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:31:27.296 No valid GPT data, bailing 00:31:27.296 05:22:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:27.296 05:22:14 -- scripts/common.sh@394 -- # pt= 00:31:27.296 05:22:14 -- scripts/common.sh@395 -- # return 1 00:31:27.296 05:22:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:31:27.296 1+0 records in 00:31:27.296 1+0 records out 00:31:27.296 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00516607 s, 203 MB/s 00:31:27.296 05:22:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:31:27.296 05:22:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:31:27.296 05:22:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:31:27.296 05:22:14 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:31:27.296 05:22:14 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:31:27.296 No valid GPT data, bailing 00:31:27.296 05:22:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:31:27.296 05:22:14 -- scripts/common.sh@394 -- # pt= 00:31:27.296 05:22:14 -- scripts/common.sh@395 -- # return 1 00:31:27.296 05:22:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:31:27.296 1+0 records in 00:31:27.296 1+0 records out 00:31:27.296 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00406183 s, 258 MB/s 00:31:27.296 05:22:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:31:27.296 05:22:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:31:27.296 05:22:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:31:27.296 05:22:14 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:31:27.296 05:22:14 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:31:27.296 No valid GPT data, bailing 00:31:27.296 05:22:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:31:27.297 05:22:14 -- scripts/common.sh@394 -- # pt= 00:31:27.297 05:22:14 -- scripts/common.sh@395 -- # return 1 00:31:27.297 05:22:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:31:27.297 1+0 records in 00:31:27.297 1+0 records out 00:31:27.297 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00555233 s, 189 MB/s 00:31:27.297 05:22:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:31:27.297 05:22:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:31:27.297 05:22:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:31:27.297 05:22:14 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:31:27.297 05:22:14 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:31:27.555 No valid GPT data, bailing 00:31:27.555 05:22:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:31:27.555 05:22:14 -- scripts/common.sh@394 -- # pt= 00:31:27.555 05:22:14 -- scripts/common.sh@395 -- # return 1 00:31:27.555 05:22:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:31:27.555 1+0 records in 00:31:27.555 1+0 records out 00:31:27.555 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00441609 s, 237 MB/s 00:31:27.555 05:22:14 -- spdk/autotest.sh@105 -- # sync 00:31:27.555 05:22:14 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:31:27.555 05:22:14 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:31:27.555 05:22:14 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:31:30.086 05:22:16 -- spdk/autotest.sh@111 -- # uname -s 00:31:30.086 05:22:16 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:31:30.086 05:22:16 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:31:30.086 05:22:16 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:31:30.344 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:30.344 Hugepages 00:31:30.344 node hugesize free / total 00:31:30.344 node0 1048576kB 0 / 0 00:31:30.344 node0 2048kB 0 / 0 00:31:30.344 00:31:30.344 Type BDF Vendor Device NUMA Driver Device Block devices 00:31:30.603 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:31:30.603 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:31:30.603 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:31:30.603 05:22:17 -- spdk/autotest.sh@117 -- # uname -s 00:31:30.603 05:22:17 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:31:30.603 05:22:17 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:31:30.603 05:22:17 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:31.550 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:31.550 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:31:31.550 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:31:31.550 05:22:18 -- common/autotest_common.sh@1517 -- # sleep 1 00:31:32.485 05:22:19 -- common/autotest_common.sh@1518 -- # bdfs=() 00:31:32.485 05:22:19 -- common/autotest_common.sh@1518 -- # local bdfs 00:31:32.485 05:22:19 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:31:32.485 05:22:19 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:31:32.485 05:22:19 -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:32.485 05:22:19 -- common/autotest_common.sh@1498 -- # local bdfs 00:31:32.485 05:22:19 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:32.485 05:22:19 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:32.485 05:22:19 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:32.743 05:22:19 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:31:32.743 05:22:19 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:31:32.743 05:22:19 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:33.001 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:33.001 Waiting for block devices as requested 00:31:33.001 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:33.259 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:33.259 05:22:20 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:31:33.259 05:22:20 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:31:33.259 05:22:20 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:31:33.259 05:22:20 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:31:33.259 05:22:20 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:31:33.259 05:22:20 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:31:33.259 05:22:20 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:31:33.259 05:22:20 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:31:33.259 05:22:20 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:31:33.259 05:22:20 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:31:33.259 05:22:20 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:31:33.259 05:22:20 -- common/autotest_common.sh@1531 -- # grep oacs 00:31:33.259 05:22:20 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:31:33.259 05:22:20 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:31:33.259 05:22:20 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:31:33.259 05:22:20 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:31:33.259 05:22:20 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:31:33.259 05:22:20 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:31:33.259 05:22:20 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:31:33.259 05:22:20 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:31:33.259 05:22:20 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:31:33.259 05:22:20 -- common/autotest_common.sh@1543 -- # continue 00:31:33.259 05:22:20 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:31:33.259 05:22:20 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:31:33.259 05:22:20 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:31:33.259 05:22:20 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:31:33.259 05:22:20 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:31:33.259 05:22:20 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:31:33.259 05:22:20 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:31:33.259 05:22:20 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:31:33.259 05:22:20 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:31:33.259 05:22:20 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:31:33.259 05:22:20 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:31:33.259 05:22:20 -- common/autotest_common.sh@1531 -- # grep oacs 00:31:33.259 05:22:20 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:31:33.259 05:22:20 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:31:33.259 05:22:20 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:31:33.259 05:22:20 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:31:33.259 05:22:20 -- common/autotest_common.sh@1540 -- # grep unvmcap 
00:31:33.259 05:22:20 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:31:33.259 05:22:20 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:31:33.259 05:22:20 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:31:33.259 05:22:20 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:31:33.259 05:22:20 -- common/autotest_common.sh@1543 -- # continue 00:31:33.259 05:22:20 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:31:33.259 05:22:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:33.259 05:22:20 -- common/autotest_common.sh@10 -- # set +x 00:31:33.259 05:22:20 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:31:33.259 05:22:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:33.259 05:22:20 -- common/autotest_common.sh@10 -- # set +x 00:31:33.517 05:22:20 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:34.085 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:34.085 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:31:34.343 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:31:34.343 05:22:21 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:31:34.343 05:22:21 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:34.343 05:22:21 -- common/autotest_common.sh@10 -- # set +x 00:31:34.343 05:22:21 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:31:34.343 05:22:21 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:31:34.343 05:22:21 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:31:34.343 05:22:21 -- common/autotest_common.sh@1563 -- # bdfs=() 00:31:34.343 05:22:21 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:31:34.343 05:22:21 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:31:34.343 05:22:21 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:31:34.343 05:22:21 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:31:34.343 
05:22:21 -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:34.343 05:22:21 -- common/autotest_common.sh@1498 -- # local bdfs 00:31:34.343 05:22:21 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:34.343 05:22:21 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:34.343 05:22:21 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:34.343 05:22:21 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:31:34.343 05:22:21 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:31:34.343 05:22:21 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:31:34.343 05:22:21 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:31:34.343 05:22:21 -- common/autotest_common.sh@1566 -- # device=0x0010 00:31:34.343 05:22:21 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:31:34.343 05:22:21 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:31:34.343 05:22:21 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:31:34.343 05:22:21 -- common/autotest_common.sh@1566 -- # device=0x0010 00:31:34.343 05:22:21 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:31:34.343 05:22:21 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:31:34.343 05:22:21 -- common/autotest_common.sh@1572 -- # return 0 00:31:34.343 05:22:21 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:31:34.343 05:22:21 -- common/autotest_common.sh@1580 -- # return 0 00:31:34.343 05:22:21 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:31:34.343 05:22:21 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:31:34.343 05:22:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:31:34.343 05:22:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:31:34.343 05:22:21 -- spdk/autotest.sh@149 -- # timing_enter lib 00:31:34.343 05:22:21 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:31:34.343 05:22:21 -- common/autotest_common.sh@10 -- # set +x 00:31:34.343 05:22:21 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:31:34.343 05:22:21 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:31:34.343 05:22:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:34.343 05:22:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:34.343 05:22:21 -- common/autotest_common.sh@10 -- # set +x 00:31:34.343 ************************************ 00:31:34.343 START TEST env 00:31:34.343 ************************************ 00:31:34.343 05:22:21 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:31:34.602 * Looking for test storage... 00:31:34.602 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:31:34.602 05:22:21 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:34.602 05:22:21 env -- common/autotest_common.sh@1693 -- # lcov --version 00:31:34.602 05:22:21 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:34.602 05:22:21 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:34.602 05:22:21 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:34.602 05:22:21 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:34.602 05:22:21 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:34.602 05:22:21 env -- scripts/common.sh@336 -- # IFS=.-: 00:31:34.602 05:22:21 env -- scripts/common.sh@336 -- # read -ra ver1 00:31:34.602 05:22:21 env -- scripts/common.sh@337 -- # IFS=.-: 00:31:34.602 05:22:21 env -- scripts/common.sh@337 -- # read -ra ver2 00:31:34.602 05:22:21 env -- scripts/common.sh@338 -- # local 'op=<' 00:31:34.602 05:22:21 env -- scripts/common.sh@340 -- # ver1_l=2 00:31:34.602 05:22:21 env -- scripts/common.sh@341 -- # ver2_l=1 00:31:34.602 05:22:21 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:34.602 05:22:21 env -- 
scripts/common.sh@344 -- # case "$op" in 00:31:34.602 05:22:21 env -- scripts/common.sh@345 -- # : 1 00:31:34.602 05:22:21 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:34.602 05:22:21 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:34.602 05:22:21 env -- scripts/common.sh@365 -- # decimal 1 00:31:34.602 05:22:21 env -- scripts/common.sh@353 -- # local d=1 00:31:34.602 05:22:21 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:34.602 05:22:21 env -- scripts/common.sh@355 -- # echo 1 00:31:34.602 05:22:21 env -- scripts/common.sh@365 -- # ver1[v]=1 00:31:34.602 05:22:21 env -- scripts/common.sh@366 -- # decimal 2 00:31:34.602 05:22:21 env -- scripts/common.sh@353 -- # local d=2 00:31:34.603 05:22:21 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:34.603 05:22:21 env -- scripts/common.sh@355 -- # echo 2 00:31:34.603 05:22:21 env -- scripts/common.sh@366 -- # ver2[v]=2 00:31:34.603 05:22:21 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:34.603 05:22:21 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:34.603 05:22:21 env -- scripts/common.sh@368 -- # return 0 00:31:34.603 05:22:21 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:34.603 05:22:21 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:34.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:34.603 --rc genhtml_branch_coverage=1 00:31:34.603 --rc genhtml_function_coverage=1 00:31:34.603 --rc genhtml_legend=1 00:31:34.603 --rc geninfo_all_blocks=1 00:31:34.603 --rc geninfo_unexecuted_blocks=1 00:31:34.603 00:31:34.603 ' 00:31:34.603 05:22:21 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:34.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:34.603 --rc genhtml_branch_coverage=1 00:31:34.603 --rc genhtml_function_coverage=1 00:31:34.603 --rc genhtml_legend=1 00:31:34.603 --rc 
geninfo_all_blocks=1 00:31:34.603 --rc geninfo_unexecuted_blocks=1 00:31:34.603 00:31:34.603 ' 00:31:34.603 05:22:21 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:34.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:34.603 --rc genhtml_branch_coverage=1 00:31:34.603 --rc genhtml_function_coverage=1 00:31:34.603 --rc genhtml_legend=1 00:31:34.603 --rc geninfo_all_blocks=1 00:31:34.603 --rc geninfo_unexecuted_blocks=1 00:31:34.603 00:31:34.603 ' 00:31:34.603 05:22:21 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:34.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:34.603 --rc genhtml_branch_coverage=1 00:31:34.603 --rc genhtml_function_coverage=1 00:31:34.603 --rc genhtml_legend=1 00:31:34.603 --rc geninfo_all_blocks=1 00:31:34.603 --rc geninfo_unexecuted_blocks=1 00:31:34.603 00:31:34.603 ' 00:31:34.603 05:22:21 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:31:34.603 05:22:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:34.603 05:22:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:34.603 05:22:21 env -- common/autotest_common.sh@10 -- # set +x 00:31:34.603 ************************************ 00:31:34.603 START TEST env_memory 00:31:34.603 ************************************ 00:31:34.603 05:22:21 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:31:34.603 00:31:34.603 00:31:34.603 CUnit - A unit testing framework for C - Version 2.1-3 00:31:34.603 http://cunit.sourceforge.net/ 00:31:34.603 00:31:34.603 00:31:34.603 Suite: mem_map_2mb 00:31:34.861 Test: alloc and free memory map ...[2024-12-09 05:22:21.583974] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 311:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:31:34.861 passed 00:31:34.861 Test: mem map translation ...[2024-12-09 05:22:21.647495] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 629:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:31:34.861 [2024-12-09 05:22:21.647817] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 629:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:31:34.861 [2024-12-09 05:22:21.647944] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 623:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:31:34.861 [2024-12-09 05:22:21.647981] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 639:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:31:34.861 passed 00:31:34.861 Test: mem map registration ...[2024-12-09 05:22:21.748974] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 381:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:31:34.861 [2024-12-09 05:22:21.749056] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 381:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:31:34.861 passed 00:31:35.120 Test: mem map adjacent registrations ...passed 00:31:35.120 Suite: mem_map_4kb 00:31:35.120 Test: alloc and free memory map ...[2024-12-09 05:22:21.931611] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 311:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:31:35.120 passed 00:31:35.120 Test: mem map translation ...[2024-12-09 05:22:21.984742] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 629:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=4096 len=1234 00:31:35.120 [2024-12-09 05:22:21.984888] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 629:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=4096 00:31:35.120 [2024-12-09 05:22:22.008723] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 623:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:31:35.120 [2024-12-09 05:22:22.008838] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 639:spdk_mem_map_set_translation: *ERROR*: could not get 0xfffffffff000 map 00:31:35.379 passed 00:31:35.379 Test: mem map registration ...[2024-12-09 05:22:22.123905] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 381:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=1000 len=1234 00:31:35.379 [2024-12-09 05:22:22.124037] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 381:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=4096 00:31:35.379 passed 00:31:35.379 Test: mem map adjacent registrations ...passed 00:31:35.379 00:31:35.379 Run Summary: Type Total Ran Passed Failed Inactive 00:31:35.379 suites 2 2 n/a 0 0 00:31:35.379 tests 8 8 8 0 0 00:31:35.379 asserts 304 304 304 0 n/a 00:31:35.379 00:31:35.379 Elapsed time = 0.743 seconds 00:31:35.379 00:31:35.379 real 0m0.798s 00:31:35.379 user 0m0.737s 00:31:35.379 sys 0m0.043s 00:31:35.379 05:22:22 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:35.379 05:22:22 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:31:35.379 ************************************ 00:31:35.379 END TEST env_memory 00:31:35.379 ************************************ 00:31:35.379 05:22:22 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:31:35.379 05:22:22 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:35.379 05:22:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:35.379 05:22:22 env -- common/autotest_common.sh@10 -- # set +x 00:31:35.638 ************************************ 00:31:35.638 START TEST env_vtophys 00:31:35.638 ************************************ 00:31:35.638 05:22:22 env.env_vtophys -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:31:35.638 EAL: lib.eal log level changed from notice to debug 00:31:35.638 EAL: Detected lcore 0 as core 0 on socket 0 00:31:35.638 EAL: Detected lcore 1 as core 0 on socket 0 00:31:35.638 EAL: Detected lcore 2 as core 0 on socket 0 00:31:35.638 EAL: Detected lcore 3 as core 0 on socket 0 00:31:35.638 EAL: Detected lcore 4 as core 0 on socket 0 00:31:35.638 EAL: Detected lcore 5 as core 0 on socket 0 00:31:35.638 EAL: Detected lcore 6 as core 0 on socket 0 00:31:35.638 EAL: Detected lcore 7 as core 0 on socket 0 00:31:35.638 EAL: Detected lcore 8 as core 0 on socket 0 00:31:35.638 EAL: Detected lcore 9 as core 0 on socket 0 00:31:35.638 EAL: Maximum logical cores by configuration: 128 00:31:35.638 EAL: Detected CPU lcores: 10 00:31:35.638 EAL: Detected NUMA nodes: 1 00:31:35.638 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:31:35.638 EAL: Detected shared linkage of DPDK 00:31:35.638 EAL: No shared files mode enabled, IPC will be disabled 00:31:35.638 EAL: Selected IOVA mode 'PA' 00:31:35.638 EAL: Probing VFIO support... 00:31:35.638 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:31:35.638 EAL: VFIO modules not loaded, skipping VFIO support... 00:31:35.638 EAL: Ask a virtual area of 0x2e000 bytes 00:31:35.638 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:31:35.638 EAL: Setting up physically contiguous memory... 
00:31:35.638 EAL: Setting maximum number of open files to 524288 00:31:35.638 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:31:35.638 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:31:35.638 EAL: Ask a virtual area of 0x61000 bytes 00:31:35.638 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:31:35.638 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:31:35.638 EAL: Ask a virtual area of 0x400000000 bytes 00:31:35.638 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:31:35.638 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:31:35.638 EAL: Ask a virtual area of 0x61000 bytes 00:31:35.638 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:31:35.638 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:31:35.638 EAL: Ask a virtual area of 0x400000000 bytes 00:31:35.638 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:31:35.638 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:31:35.638 EAL: Ask a virtual area of 0x61000 bytes 00:31:35.638 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:31:35.638 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:31:35.638 EAL: Ask a virtual area of 0x400000000 bytes 00:31:35.638 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:31:35.638 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:31:35.638 EAL: Ask a virtual area of 0x61000 bytes 00:31:35.638 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:31:35.638 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:31:35.638 EAL: Ask a virtual area of 0x400000000 bytes 00:31:35.638 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:31:35.638 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:31:35.638 EAL: Hugepages will be freed exactly as allocated. 
00:31:35.638 EAL: No shared files mode enabled, IPC is disabled 00:31:35.638 EAL: No shared files mode enabled, IPC is disabled 00:31:35.638 EAL: TSC frequency is ~2200000 KHz 00:31:35.638 EAL: Main lcore 0 is ready (tid=7f0e10a55a40;cpuset=[0]) 00:31:35.638 EAL: Trying to obtain current memory policy. 00:31:35.638 EAL: Setting policy MPOL_PREFERRED for socket 0 00:31:35.638 EAL: Restoring previous memory policy: 0 00:31:35.638 EAL: request: mp_malloc_sync 00:31:35.638 EAL: No shared files mode enabled, IPC is disabled 00:31:35.638 EAL: Heap on socket 0 was expanded by 2MB 00:31:35.638 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:31:35.638 EAL: No PCI address specified using 'addr=' in: bus=pci 00:31:35.638 EAL: Mem event callback 'spdk:(nil)' registered 00:31:35.638 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:31:35.897 00:31:35.897 00:31:35.897 CUnit - A unit testing framework for C - Version 2.1-3 00:31:35.897 http://cunit.sourceforge.net/ 00:31:35.897 00:31:35.897 00:31:35.897 Suite: components_suite 00:31:36.155 Test: vtophys_malloc_test ...passed 00:31:36.414 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:31:36.414 EAL: Setting policy MPOL_PREFERRED for socket 0 00:31:36.414 EAL: Restoring previous memory policy: 4 00:31:36.414 EAL: Calling mem event callback 'spdk:(nil)' 00:31:36.414 EAL: request: mp_malloc_sync 00:31:36.414 EAL: No shared files mode enabled, IPC is disabled 00:31:36.414 EAL: Heap on socket 0 was expanded by 4MB 00:31:36.414 EAL: Calling mem event callback 'spdk:(nil)' 00:31:36.414 EAL: request: mp_malloc_sync 00:31:36.414 EAL: No shared files mode enabled, IPC is disabled 00:31:36.414 EAL: Heap on socket 0 was shrunk by 4MB 00:31:36.414 EAL: Trying to obtain current memory policy. 
00:31:36.414 EAL: Setting policy MPOL_PREFERRED for socket 0 00:31:36.414 EAL: Restoring previous memory policy: 4 00:31:36.414 EAL: Calling mem event callback 'spdk:(nil)' 00:31:36.414 EAL: request: mp_malloc_sync 00:31:36.414 EAL: No shared files mode enabled, IPC is disabled 00:31:36.414 EAL: Heap on socket 0 was expanded by 6MB 00:31:36.414 EAL: Calling mem event callback 'spdk:(nil)' 00:31:36.414 EAL: request: mp_malloc_sync 00:31:36.414 EAL: No shared files mode enabled, IPC is disabled 00:31:36.414 EAL: Heap on socket 0 was shrunk by 6MB 00:31:36.414 EAL: Trying to obtain current memory policy. 00:31:36.414 EAL: Setting policy MPOL_PREFERRED for socket 0 00:31:36.414 EAL: Restoring previous memory policy: 4 00:31:36.414 EAL: Calling mem event callback 'spdk:(nil)' 00:31:36.414 EAL: request: mp_malloc_sync 00:31:36.414 EAL: No shared files mode enabled, IPC is disabled 00:31:36.414 EAL: Heap on socket 0 was expanded by 10MB 00:31:36.414 EAL: Calling mem event callback 'spdk:(nil)' 00:31:36.414 EAL: request: mp_malloc_sync 00:31:36.414 EAL: No shared files mode enabled, IPC is disabled 00:31:36.414 EAL: Heap on socket 0 was shrunk by 10MB 00:31:36.414 EAL: Trying to obtain current memory policy. 00:31:36.414 EAL: Setting policy MPOL_PREFERRED for socket 0 00:31:36.414 EAL: Restoring previous memory policy: 4 00:31:36.414 EAL: Calling mem event callback 'spdk:(nil)' 00:31:36.414 EAL: request: mp_malloc_sync 00:31:36.414 EAL: No shared files mode enabled, IPC is disabled 00:31:36.414 EAL: Heap on socket 0 was expanded by 18MB 00:31:36.414 EAL: Calling mem event callback 'spdk:(nil)' 00:31:36.414 EAL: request: mp_malloc_sync 00:31:36.414 EAL: No shared files mode enabled, IPC is disabled 00:31:36.414 EAL: Heap on socket 0 was shrunk by 18MB 00:31:36.414 EAL: Trying to obtain current memory policy. 
00:31:36.414 EAL: Setting policy MPOL_PREFERRED for socket 0 00:31:36.414 EAL: Restoring previous memory policy: 4 00:31:36.414 EAL: Calling mem event callback 'spdk:(nil)' 00:31:36.414 EAL: request: mp_malloc_sync 00:31:36.414 EAL: No shared files mode enabled, IPC is disabled 00:31:36.414 EAL: Heap on socket 0 was expanded by 34MB 00:31:36.414 EAL: Calling mem event callback 'spdk:(nil)' 00:31:36.414 EAL: request: mp_malloc_sync 00:31:36.414 EAL: No shared files mode enabled, IPC is disabled 00:31:36.414 EAL: Heap on socket 0 was shrunk by 34MB 00:31:36.414 EAL: Trying to obtain current memory policy. 00:31:36.414 EAL: Setting policy MPOL_PREFERRED for socket 0 00:31:36.414 EAL: Restoring previous memory policy: 4 00:31:36.414 EAL: Calling mem event callback 'spdk:(nil)' 00:31:36.414 EAL: request: mp_malloc_sync 00:31:36.414 EAL: No shared files mode enabled, IPC is disabled 00:31:36.414 EAL: Heap on socket 0 was expanded by 66MB 00:31:36.680 EAL: Calling mem event callback 'spdk:(nil)' 00:31:36.680 EAL: request: mp_malloc_sync 00:31:36.680 EAL: No shared files mode enabled, IPC is disabled 00:31:36.680 EAL: Heap on socket 0 was shrunk by 66MB 00:31:36.680 EAL: Trying to obtain current memory policy. 00:31:36.680 EAL: Setting policy MPOL_PREFERRED for socket 0 00:31:36.680 EAL: Restoring previous memory policy: 4 00:31:36.680 EAL: Calling mem event callback 'spdk:(nil)' 00:31:36.680 EAL: request: mp_malloc_sync 00:31:36.680 EAL: No shared files mode enabled, IPC is disabled 00:31:36.680 EAL: Heap on socket 0 was expanded by 130MB 00:31:36.938 EAL: Calling mem event callback 'spdk:(nil)' 00:31:36.938 EAL: request: mp_malloc_sync 00:31:36.938 EAL: No shared files mode enabled, IPC is disabled 00:31:36.938 EAL: Heap on socket 0 was shrunk by 130MB 00:31:37.197 EAL: Trying to obtain current memory policy. 
00:31:37.197 EAL: Setting policy MPOL_PREFERRED for socket 0 00:31:37.197 EAL: Restoring previous memory policy: 4 00:31:37.197 EAL: Calling mem event callback 'spdk:(nil)' 00:31:37.197 EAL: request: mp_malloc_sync 00:31:37.197 EAL: No shared files mode enabled, IPC is disabled 00:31:37.197 EAL: Heap on socket 0 was expanded by 258MB 00:31:37.795 EAL: Calling mem event callback 'spdk:(nil)' 00:31:37.795 EAL: request: mp_malloc_sync 00:31:37.795 EAL: No shared files mode enabled, IPC is disabled 00:31:37.795 EAL: Heap on socket 0 was shrunk by 258MB 00:31:38.054 EAL: Trying to obtain current memory policy. 00:31:38.054 EAL: Setting policy MPOL_PREFERRED for socket 0 00:31:38.316 EAL: Restoring previous memory policy: 4 00:31:38.316 EAL: Calling mem event callback 'spdk:(nil)' 00:31:38.316 EAL: request: mp_malloc_sync 00:31:38.316 EAL: No shared files mode enabled, IPC is disabled 00:31:38.316 EAL: Heap on socket 0 was expanded by 514MB 00:31:39.254 EAL: Calling mem event callback 'spdk:(nil)' 00:31:39.254 EAL: request: mp_malloc_sync 00:31:39.254 EAL: No shared files mode enabled, IPC is disabled 00:31:39.254 EAL: Heap on socket 0 was shrunk by 514MB 00:31:39.821 EAL: Trying to obtain current memory policy. 
00:31:39.821 EAL: Setting policy MPOL_PREFERRED for socket 0 00:31:40.079 EAL: Restoring previous memory policy: 4 00:31:40.079 EAL: Calling mem event callback 'spdk:(nil)' 00:31:40.079 EAL: request: mp_malloc_sync 00:31:40.079 EAL: No shared files mode enabled, IPC is disabled 00:31:40.079 EAL: Heap on socket 0 was expanded by 1026MB 00:31:41.978 EAL: Calling mem event callback 'spdk:(nil)' 00:31:41.978 EAL: request: mp_malloc_sync 00:31:41.978 EAL: No shared files mode enabled, IPC is disabled 00:31:41.978 EAL: Heap on socket 0 was shrunk by 1026MB 00:31:43.357 passed 00:31:43.357 00:31:43.357 Run Summary: Type Total Ran Passed Failed Inactive 00:31:43.357 suites 1 1 n/a 0 0 00:31:43.357 tests 2 2 2 0 0 00:31:43.357 asserts 5761 5761 5761 0 n/a 00:31:43.357 00:31:43.358 Elapsed time = 7.538 seconds 00:31:43.358 EAL: Calling mem event callback 'spdk:(nil)' 00:31:43.358 EAL: request: mp_malloc_sync 00:31:43.358 EAL: No shared files mode enabled, IPC is disabled 00:31:43.358 EAL: Heap on socket 0 was shrunk by 2MB 00:31:43.358 EAL: No shared files mode enabled, IPC is disabled 00:31:43.358 EAL: No shared files mode enabled, IPC is disabled 00:31:43.358 EAL: No shared files mode enabled, IPC is disabled 00:31:43.358 00:31:43.358 real 0m7.909s 00:31:43.358 user 0m6.495s 00:31:43.358 sys 0m1.235s 00:31:43.358 05:22:30 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:43.358 05:22:30 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:31:43.358 ************************************ 00:31:43.358 END TEST env_vtophys 00:31:43.358 ************************************ 00:31:43.358 05:22:30 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:31:43.358 05:22:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:43.358 05:22:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:43.358 05:22:30 env -- common/autotest_common.sh@10 -- # set +x 00:31:43.358 
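The env_vtophys run above grows and shrinks the heap in a fixed progression: 6MB, 10MB, 18MB, 34MB, 66MB, 130MB, 258MB, 514MB, and finally 1026MB. Each step is a power of two plus 2MB. A minimal sketch (an observation about the log, not SPDK code) reproducing that sequence:

```python
# Allocation sizes observed in the env_vtophys EAL log above: 2**k + 2 MB for k = 2..10.
sizes_mb = [2**k + 2 for k in range(2, 11)]
print(sizes_mb)  # [6, 10, 18, 34, 66, 130, 258, 514, 1026]
```

Each size triggers one "expanded by" and one matching "shrunk by" mem event callback, which is why every size appears exactly twice in the trace.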
************************************ 00:31:43.358 START TEST env_pci 00:31:43.358 ************************************ 00:31:43.358 05:22:30 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:31:43.616 00:31:43.616 00:31:43.616 CUnit - A unit testing framework for C - Version 2.1-3 00:31:43.616 http://cunit.sourceforge.net/ 00:31:43.616 00:31:43.616 00:31:43.616 Suite: pci 00:31:43.616 Test: pci_hook ...[2024-12-09 05:22:30.359936] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56673 has claimed it 00:31:43.616 passed 00:31:43.616 00:31:43.616 EAL: Cannot find device (10000:00:01.0) 00:31:43.616 EAL: Failed to attach device on primary process 00:31:43.616 Run Summary: Type Total Ran Passed Failed Inactive 00:31:43.616 suites 1 1 n/a 0 0 00:31:43.616 tests 1 1 1 0 0 00:31:43.616 asserts 25 25 25 0 n/a 00:31:43.616 00:31:43.616 Elapsed time = 0.007 seconds 00:31:43.616 00:31:43.616 real 0m0.085s 00:31:43.616 user 0m0.040s 00:31:43.616 sys 0m0.044s 00:31:43.616 05:22:30 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:43.616 05:22:30 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:31:43.616 ************************************ 00:31:43.616 END TEST env_pci 00:31:43.616 ************************************ 00:31:43.616 05:22:30 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:31:43.616 05:22:30 env -- env/env.sh@15 -- # uname 00:31:43.616 05:22:30 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:31:43.616 05:22:30 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:31:43.616 05:22:30 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:31:43.616 05:22:30 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:31:43.616 05:22:30 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:43.616 05:22:30 env -- common/autotest_common.sh@10 -- # set +x 00:31:43.616 ************************************ 00:31:43.616 START TEST env_dpdk_post_init 00:31:43.616 ************************************ 00:31:43.616 05:22:30 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:31:43.616 EAL: Detected CPU lcores: 10 00:31:43.616 EAL: Detected NUMA nodes: 1 00:31:43.616 EAL: Detected shared linkage of DPDK 00:31:43.616 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:31:43.616 EAL: Selected IOVA mode 'PA' 00:31:43.875 TELEMETRY: No legacy callbacks, legacy socket not created 00:31:43.875 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:31:43.875 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:31:43.875 Starting DPDK initialization... 00:31:43.875 Starting SPDK post initialization... 00:31:43.875 SPDK NVMe probe 00:31:43.875 Attaching to 0000:00:10.0 00:31:43.875 Attaching to 0000:00:11.0 00:31:43.875 Attached to 0000:00:10.0 00:31:43.875 Attached to 0000:00:11.0 00:31:43.875 Cleaning up... 
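env_dpdk_post_init probes and attaches the two emulated NVMe controllers at 0000:00:10.0 and 0000:00:11.0. Those addresses are standard PCI domain:bus:device.function (BDF) strings; a small sketch splitting one (a hypothetical helper for illustration, not SPDK's actual parser):

```python
# Split a PCI address like "0000:00:10.0" (as attached in the log above)
# into its hex-encoded domain, bus, device, and function fields.
def parse_bdf(addr: str):
    domain, bus, devfn = addr.split(":")
    dev, fn = devfn.split(".")
    return int(domain, 16), int(bus, 16), int(dev, 16), int(fn, 16)

print(parse_bdf("0000:00:10.0"))  # (0, 0, 16, 0) -- device number 0x10
```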
00:31:43.875 00:31:43.875 real 0m0.323s 00:31:43.875 user 0m0.117s 00:31:43.875 sys 0m0.105s 00:31:43.875 05:22:30 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:43.875 05:22:30 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:31:43.875 ************************************ 00:31:43.875 END TEST env_dpdk_post_init 00:31:43.875 ************************************ 00:31:43.875 05:22:30 env -- env/env.sh@26 -- # uname 00:31:43.875 05:22:30 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:31:43.875 05:22:30 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:31:43.875 05:22:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:43.875 05:22:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:43.875 05:22:30 env -- common/autotest_common.sh@10 -- # set +x 00:31:43.875 ************************************ 00:31:43.875 START TEST env_mem_callbacks 00:31:43.875 ************************************ 00:31:43.875 05:22:30 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:31:44.133 EAL: Detected CPU lcores: 10 00:31:44.133 EAL: Detected NUMA nodes: 1 00:31:44.133 EAL: Detected shared linkage of DPDK 00:31:44.134 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:31:44.134 EAL: Selected IOVA mode 'PA' 00:31:44.134 TELEMETRY: No legacy callbacks, legacy socket not created 00:31:44.134 00:31:44.134 00:31:44.134 CUnit - A unit testing framework for C - Version 2.1-3 00:31:44.134 http://cunit.sourceforge.net/ 00:31:44.134 00:31:44.134 00:31:44.134 Suite: memory 00:31:44.134 Test: test ... 
00:31:44.134 register 0x200000200000 2097152 00:31:44.134 malloc 3145728 00:31:44.134 register 0x200000400000 4194304 00:31:44.134 buf 0x2000004fffc0 len 3145728 PASSED 00:31:44.134 malloc 64 00:31:44.134 buf 0x2000004ffec0 len 64 PASSED 00:31:44.134 malloc 4194304 00:31:44.134 register 0x200000800000 6291456 00:31:44.134 buf 0x2000009fffc0 len 4194304 PASSED 00:31:44.134 free 0x2000004fffc0 3145728 00:31:44.134 free 0x2000004ffec0 64 00:31:44.134 unregister 0x200000400000 4194304 PASSED 00:31:44.134 free 0x2000009fffc0 4194304 00:31:44.134 unregister 0x200000800000 6291456 PASSED 00:31:44.134 malloc 8388608 00:31:44.134 register 0x200000400000 10485760 00:31:44.134 buf 0x2000005fffc0 len 8388608 PASSED 00:31:44.134 free 0x2000005fffc0 8388608 00:31:44.134 unregister 0x200000400000 10485760 PASSED 00:31:44.419 passed 00:31:44.419 00:31:44.419 Run Summary: Type Total Ran Passed Failed Inactive 00:31:44.419 suites 1 1 n/a 0 0 00:31:44.419 tests 1 1 1 0 0 00:31:44.419 asserts 15 15 15 0 n/a 00:31:44.419 00:31:44.419 Elapsed time = 0.073 seconds 00:31:44.419 00:31:44.419 real 0m0.295s 00:31:44.419 user 0m0.110s 00:31:44.419 sys 0m0.082s 00:31:44.419 05:22:31 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:44.419 05:22:31 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:31:44.419 ************************************ 00:31:44.419 END TEST env_mem_callbacks 00:31:44.419 ************************************ 00:31:44.419 00:31:44.419 real 0m9.908s 00:31:44.419 user 0m7.726s 00:31:44.419 sys 0m1.765s 00:31:44.419 05:22:31 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:44.419 05:22:31 env -- common/autotest_common.sh@10 -- # set +x 00:31:44.419 ************************************ 00:31:44.419 END TEST env 00:31:44.419 ************************************ 00:31:44.419 05:22:31 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:31:44.419 05:22:31 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:44.419 05:22:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:44.419 05:22:31 -- common/autotest_common.sh@10 -- # set +x 00:31:44.419 ************************************ 00:31:44.419 START TEST rpc 00:31:44.419 ************************************ 00:31:44.419 05:22:31 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:31:44.419 * Looking for test storage... 00:31:44.419 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:31:44.419 05:22:31 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:44.419 05:22:31 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:31:44.419 05:22:31 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:44.704 05:22:31 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:44.704 05:22:31 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:44.704 05:22:31 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:44.704 05:22:31 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:44.704 05:22:31 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:31:44.704 05:22:31 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:31:44.704 05:22:31 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:31:44.704 05:22:31 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:31:44.704 05:22:31 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:31:44.704 05:22:31 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:31:44.704 05:22:31 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:31:44.704 05:22:31 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:44.704 05:22:31 rpc -- scripts/common.sh@344 -- # case "$op" in 00:31:44.704 05:22:31 rpc -- scripts/common.sh@345 -- # : 1 00:31:44.704 05:22:31 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:44.704 05:22:31 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:44.704 05:22:31 rpc -- scripts/common.sh@365 -- # decimal 1 00:31:44.704 05:22:31 rpc -- scripts/common.sh@353 -- # local d=1 00:31:44.704 05:22:31 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:44.704 05:22:31 rpc -- scripts/common.sh@355 -- # echo 1 00:31:44.704 05:22:31 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:31:44.704 05:22:31 rpc -- scripts/common.sh@366 -- # decimal 2 00:31:44.704 05:22:31 rpc -- scripts/common.sh@353 -- # local d=2 00:31:44.704 05:22:31 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:44.704 05:22:31 rpc -- scripts/common.sh@355 -- # echo 2 00:31:44.704 05:22:31 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:31:44.704 05:22:31 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:44.704 05:22:31 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:44.704 05:22:31 rpc -- scripts/common.sh@368 -- # return 0 00:31:44.704 05:22:31 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:44.704 05:22:31 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:44.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:44.704 --rc genhtml_branch_coverage=1 00:31:44.704 --rc genhtml_function_coverage=1 00:31:44.704 --rc genhtml_legend=1 00:31:44.704 --rc geninfo_all_blocks=1 00:31:44.704 --rc geninfo_unexecuted_blocks=1 00:31:44.704 00:31:44.704 ' 00:31:44.704 05:22:31 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:44.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:44.704 --rc genhtml_branch_coverage=1 00:31:44.704 --rc genhtml_function_coverage=1 00:31:44.704 --rc genhtml_legend=1 00:31:44.704 --rc geninfo_all_blocks=1 00:31:44.704 --rc geninfo_unexecuted_blocks=1 00:31:44.704 00:31:44.704 ' 00:31:44.704 05:22:31 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:44.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:31:44.704 --rc genhtml_branch_coverage=1 00:31:44.704 --rc genhtml_function_coverage=1 00:31:44.704 --rc genhtml_legend=1 00:31:44.704 --rc geninfo_all_blocks=1 00:31:44.704 --rc geninfo_unexecuted_blocks=1 00:31:44.704 00:31:44.704 ' 00:31:44.704 05:22:31 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:44.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:44.704 --rc genhtml_branch_coverage=1 00:31:44.704 --rc genhtml_function_coverage=1 00:31:44.704 --rc genhtml_legend=1 00:31:44.704 --rc geninfo_all_blocks=1 00:31:44.704 --rc geninfo_unexecuted_blocks=1 00:31:44.704 00:31:44.704 ' 00:31:44.704 05:22:31 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56800 00:31:44.704 05:22:31 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:31:44.704 05:22:31 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56800 00:31:44.704 05:22:31 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:31:44.704 05:22:31 rpc -- common/autotest_common.sh@835 -- # '[' -z 56800 ']' 00:31:44.704 05:22:31 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:44.704 05:22:31 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:44.704 05:22:31 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:44.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:44.704 05:22:31 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:44.704 05:22:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:31:44.704 [2024-12-09 05:22:31.583510] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:31:44.705 [2024-12-09 05:22:31.584280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56800 ] 00:31:44.963 [2024-12-09 05:22:31.780749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:45.221 [2024-12-09 05:22:31.941416] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:31:45.221 [2024-12-09 05:22:31.941525] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56800' to capture a snapshot of events at runtime. 00:31:45.221 [2024-12-09 05:22:31.941542] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:45.221 [2024-12-09 05:22:31.941556] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:45.221 [2024-12-09 05:22:31.941568] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56800 for offline analysis/debug. 
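Before the rpc tests run, the harness checks the installed lcov version with the cmp_versions helper traced earlier: split both version strings on the `.-:` IFS, compare fields numerically, and here decide that 1.15 < 2 before exporting the branch-coverage LCOV_OPTS. An equivalent sketch in Python (the zero fill for missing trailing fields is an assumption about the shell loop's unset-field behavior):

```python
import re
from itertools import zip_longest

def version_lt(a: str, b: str) -> bool:
    # Split on . - : as the shell helper's IFS does, compare fields numerically.
    va = (int(x) for x in re.split(r"[.\-:]", a))
    vb = (int(x) for x in re.split(r"[.\-:]", b))
    for x, y in zip_longest(va, vb, fillvalue=0):
        if x != y:
            return x < y
    return False  # equal versions are not less-than

print(version_lt("1.15", "2"))  # True -- lcov 1.15 is older than 2
```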
00:31:45.221 [2024-12-09 05:22:31.943095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:46.160 05:22:32 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:46.160 05:22:32 rpc -- common/autotest_common.sh@868 -- # return 0 00:31:46.160 05:22:32 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:31:46.160 05:22:32 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:31:46.160 05:22:32 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:31:46.160 05:22:32 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:31:46.160 05:22:32 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:46.160 05:22:32 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:46.160 05:22:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:31:46.160 ************************************ 00:31:46.160 START TEST rpc_integrity 00:31:46.160 ************************************ 00:31:46.160 05:22:32 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:31:46.160 05:22:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:31:46.160 05:22:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.160 05:22:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:46.160 05:22:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.160 05:22:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:31:46.160 05:22:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:31:46.160 05:22:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:31:46.160 05:22:32 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:31:46.160 05:22:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.160 05:22:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:46.160 05:22:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.160 05:22:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:31:46.160 05:22:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:31:46.160 05:22:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.160 05:22:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:46.160 05:22:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.160 05:22:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:31:46.160 { 00:31:46.160 "name": "Malloc0", 00:31:46.160 "aliases": [ 00:31:46.160 "c534b7bf-5090-4d1e-97e6-3d4fe6893896" 00:31:46.160 ], 00:31:46.160 "product_name": "Malloc disk", 00:31:46.160 "block_size": 512, 00:31:46.160 "num_blocks": 16384, 00:31:46.160 "uuid": "c534b7bf-5090-4d1e-97e6-3d4fe6893896", 00:31:46.160 "assigned_rate_limits": { 00:31:46.160 "rw_ios_per_sec": 0, 00:31:46.160 "rw_mbytes_per_sec": 0, 00:31:46.160 "r_mbytes_per_sec": 0, 00:31:46.160 "w_mbytes_per_sec": 0 00:31:46.160 }, 00:31:46.160 "claimed": false, 00:31:46.160 "zoned": false, 00:31:46.160 "supported_io_types": { 00:31:46.160 "read": true, 00:31:46.160 "write": true, 00:31:46.160 "unmap": true, 00:31:46.160 "flush": true, 00:31:46.160 "reset": true, 00:31:46.160 "nvme_admin": false, 00:31:46.160 "nvme_io": false, 00:31:46.160 "nvme_io_md": false, 00:31:46.160 "write_zeroes": true, 00:31:46.160 "zcopy": true, 00:31:46.160 "get_zone_info": false, 00:31:46.160 "zone_management": false, 00:31:46.160 "zone_append": false, 00:31:46.160 "compare": false, 00:31:46.160 "compare_and_write": false, 00:31:46.160 "abort": true, 00:31:46.160 "seek_hole": false, 
00:31:46.160 "seek_data": false, 00:31:46.160 "copy": true, 00:31:46.160 "nvme_iov_md": false 00:31:46.160 }, 00:31:46.160 "memory_domains": [ 00:31:46.160 { 00:31:46.160 "dma_device_id": "system", 00:31:46.160 "dma_device_type": 1 00:31:46.160 }, 00:31:46.160 { 00:31:46.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:46.160 "dma_device_type": 2 00:31:46.160 } 00:31:46.160 ], 00:31:46.160 "driver_specific": {} 00:31:46.160 } 00:31:46.160 ]' 00:31:46.160 05:22:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:31:46.160 05:22:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:31:46.160 05:22:33 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:31:46.160 05:22:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.160 05:22:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:46.160 [2024-12-09 05:22:33.018481] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:31:46.160 [2024-12-09 05:22:33.018576] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:46.160 [2024-12-09 05:22:33.018613] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:31:46.160 [2024-12-09 05:22:33.018633] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:46.160 [2024-12-09 05:22:33.021885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:46.160 [2024-12-09 05:22:33.021938] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:31:46.160 Passthru0 00:31:46.161 05:22:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.161 05:22:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:31:46.161 05:22:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.161 05:22:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:31:46.161 05:22:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.161 05:22:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:31:46.161 { 00:31:46.161 "name": "Malloc0", 00:31:46.161 "aliases": [ 00:31:46.161 "c534b7bf-5090-4d1e-97e6-3d4fe6893896" 00:31:46.161 ], 00:31:46.161 "product_name": "Malloc disk", 00:31:46.161 "block_size": 512, 00:31:46.161 "num_blocks": 16384, 00:31:46.161 "uuid": "c534b7bf-5090-4d1e-97e6-3d4fe6893896", 00:31:46.161 "assigned_rate_limits": { 00:31:46.161 "rw_ios_per_sec": 0, 00:31:46.161 "rw_mbytes_per_sec": 0, 00:31:46.161 "r_mbytes_per_sec": 0, 00:31:46.161 "w_mbytes_per_sec": 0 00:31:46.161 }, 00:31:46.161 "claimed": true, 00:31:46.161 "claim_type": "exclusive_write", 00:31:46.161 "zoned": false, 00:31:46.161 "supported_io_types": { 00:31:46.161 "read": true, 00:31:46.161 "write": true, 00:31:46.161 "unmap": true, 00:31:46.161 "flush": true, 00:31:46.161 "reset": true, 00:31:46.161 "nvme_admin": false, 00:31:46.161 "nvme_io": false, 00:31:46.161 "nvme_io_md": false, 00:31:46.161 "write_zeroes": true, 00:31:46.161 "zcopy": true, 00:31:46.161 "get_zone_info": false, 00:31:46.161 "zone_management": false, 00:31:46.161 "zone_append": false, 00:31:46.161 "compare": false, 00:31:46.161 "compare_and_write": false, 00:31:46.161 "abort": true, 00:31:46.161 "seek_hole": false, 00:31:46.161 "seek_data": false, 00:31:46.161 "copy": true, 00:31:46.161 "nvme_iov_md": false 00:31:46.161 }, 00:31:46.161 "memory_domains": [ 00:31:46.161 { 00:31:46.161 "dma_device_id": "system", 00:31:46.161 "dma_device_type": 1 00:31:46.161 }, 00:31:46.161 { 00:31:46.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:46.161 "dma_device_type": 2 00:31:46.161 } 00:31:46.161 ], 00:31:46.161 "driver_specific": {} 00:31:46.161 }, 00:31:46.161 { 00:31:46.161 "name": "Passthru0", 00:31:46.161 "aliases": [ 00:31:46.161 "c7dd445c-6264-5c69-9973-d8ee65a074cc" 00:31:46.161 ], 00:31:46.161 "product_name": "passthru", 00:31:46.161 
"block_size": 512, 00:31:46.161 "num_blocks": 16384, 00:31:46.161 "uuid": "c7dd445c-6264-5c69-9973-d8ee65a074cc", 00:31:46.161 "assigned_rate_limits": { 00:31:46.161 "rw_ios_per_sec": 0, 00:31:46.161 "rw_mbytes_per_sec": 0, 00:31:46.161 "r_mbytes_per_sec": 0, 00:31:46.161 "w_mbytes_per_sec": 0 00:31:46.161 }, 00:31:46.161 "claimed": false, 00:31:46.161 "zoned": false, 00:31:46.161 "supported_io_types": { 00:31:46.161 "read": true, 00:31:46.161 "write": true, 00:31:46.161 "unmap": true, 00:31:46.161 "flush": true, 00:31:46.161 "reset": true, 00:31:46.161 "nvme_admin": false, 00:31:46.161 "nvme_io": false, 00:31:46.161 "nvme_io_md": false, 00:31:46.161 "write_zeroes": true, 00:31:46.161 "zcopy": true, 00:31:46.161 "get_zone_info": false, 00:31:46.161 "zone_management": false, 00:31:46.161 "zone_append": false, 00:31:46.161 "compare": false, 00:31:46.161 "compare_and_write": false, 00:31:46.161 "abort": true, 00:31:46.161 "seek_hole": false, 00:31:46.161 "seek_data": false, 00:31:46.161 "copy": true, 00:31:46.161 "nvme_iov_md": false 00:31:46.161 }, 00:31:46.161 "memory_domains": [ 00:31:46.161 { 00:31:46.161 "dma_device_id": "system", 00:31:46.161 "dma_device_type": 1 00:31:46.161 }, 00:31:46.161 { 00:31:46.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:46.161 "dma_device_type": 2 00:31:46.161 } 00:31:46.161 ], 00:31:46.161 "driver_specific": { 00:31:46.161 "passthru": { 00:31:46.161 "name": "Passthru0", 00:31:46.161 "base_bdev_name": "Malloc0" 00:31:46.161 } 00:31:46.161 } 00:31:46.161 } 00:31:46.161 ]' 00:31:46.161 05:22:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:31:46.161 05:22:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:31:46.161 05:22:33 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:31:46.161 05:22:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.161 05:22:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:46.161 05:22:33 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.161 05:22:33 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:31:46.161 05:22:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.161 05:22:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:46.421 05:22:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.421 05:22:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:31:46.421 05:22:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.421 05:22:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:46.421 05:22:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.421 05:22:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:31:46.421 05:22:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:31:46.421 05:22:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:31:46.421 00:31:46.421 real 0m0.348s 00:31:46.421 user 0m0.217s 00:31:46.421 sys 0m0.039s 00:31:46.421 05:22:33 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:46.421 05:22:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:46.421 ************************************ 00:31:46.421 END TEST rpc_integrity 00:31:46.421 ************************************ 00:31:46.421 05:22:33 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:31:46.421 05:22:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:46.421 05:22:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:46.421 05:22:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:31:46.421 ************************************ 00:31:46.421 START TEST rpc_plugins 00:31:46.421 ************************************ 00:31:46.421 05:22:33 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:31:46.421 05:22:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:31:46.421 05:22:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.421 05:22:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:31:46.421 05:22:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.421 05:22:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:31:46.421 05:22:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:31:46.421 05:22:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.421 05:22:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:31:46.421 05:22:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.421 05:22:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:31:46.421 { 00:31:46.421 "name": "Malloc1", 00:31:46.421 "aliases": [ 00:31:46.421 "aab9b8ee-bf97-4baa-951d-1db87cd02458" 00:31:46.421 ], 00:31:46.421 "product_name": "Malloc disk", 00:31:46.421 "block_size": 4096, 00:31:46.421 "num_blocks": 256, 00:31:46.421 "uuid": "aab9b8ee-bf97-4baa-951d-1db87cd02458", 00:31:46.421 "assigned_rate_limits": { 00:31:46.421 "rw_ios_per_sec": 0, 00:31:46.421 "rw_mbytes_per_sec": 0, 00:31:46.421 "r_mbytes_per_sec": 0, 00:31:46.421 "w_mbytes_per_sec": 0 00:31:46.421 }, 00:31:46.421 "claimed": false, 00:31:46.421 "zoned": false, 00:31:46.421 "supported_io_types": { 00:31:46.421 "read": true, 00:31:46.421 "write": true, 00:31:46.421 "unmap": true, 00:31:46.421 "flush": true, 00:31:46.421 "reset": true, 00:31:46.421 "nvme_admin": false, 00:31:46.421 "nvme_io": false, 00:31:46.421 "nvme_io_md": false, 00:31:46.421 "write_zeroes": true, 00:31:46.421 "zcopy": true, 00:31:46.421 "get_zone_info": false, 00:31:46.421 "zone_management": false, 00:31:46.421 "zone_append": false, 00:31:46.421 "compare": false, 00:31:46.421 "compare_and_write": false, 00:31:46.421 "abort": true, 00:31:46.421 "seek_hole": false, 00:31:46.421 "seek_data": false, 00:31:46.421 "copy": 
true, 00:31:46.421 "nvme_iov_md": false 00:31:46.421 }, 00:31:46.421 "memory_domains": [ 00:31:46.421 { 00:31:46.421 "dma_device_id": "system", 00:31:46.421 "dma_device_type": 1 00:31:46.421 }, 00:31:46.421 { 00:31:46.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:46.421 "dma_device_type": 2 00:31:46.421 } 00:31:46.421 ], 00:31:46.421 "driver_specific": {} 00:31:46.421 } 00:31:46.421 ]' 00:31:46.421 05:22:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:31:46.421 05:22:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:31:46.421 05:22:33 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:31:46.421 05:22:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.421 05:22:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:31:46.421 05:22:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.421 05:22:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:31:46.421 05:22:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.421 05:22:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:31:46.421 05:22:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.421 05:22:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:31:46.421 05:22:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:31:46.699 05:22:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:31:46.699 00:31:46.699 real 0m0.168s 00:31:46.699 user 0m0.096s 00:31:46.699 sys 0m0.028s 00:31:46.699 05:22:33 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:46.699 ************************************ 00:31:46.699 END TEST rpc_plugins 00:31:46.699 ************************************ 00:31:46.699 05:22:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:31:46.699 05:22:33 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:31:46.699 05:22:33 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:46.699 05:22:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:46.699 05:22:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:31:46.699 ************************************ 00:31:46.699 START TEST rpc_trace_cmd_test 00:31:46.699 ************************************ 00:31:46.699 05:22:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:31:46.699 05:22:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:31:46.699 05:22:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:31:46.699 05:22:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.699 05:22:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.699 05:22:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.699 05:22:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:31:46.699 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56800", 00:31:46.699 "tpoint_group_mask": "0x8", 00:31:46.699 "iscsi_conn": { 00:31:46.699 "mask": "0x2", 00:31:46.699 "tpoint_mask": "0x0" 00:31:46.699 }, 00:31:46.699 "scsi": { 00:31:46.699 "mask": "0x4", 00:31:46.699 "tpoint_mask": "0x0" 00:31:46.699 }, 00:31:46.699 "bdev": { 00:31:46.699 "mask": "0x8", 00:31:46.699 "tpoint_mask": "0xffffffffffffffff" 00:31:46.699 }, 00:31:46.699 "nvmf_rdma": { 00:31:46.699 "mask": "0x10", 00:31:46.699 "tpoint_mask": "0x0" 00:31:46.699 }, 00:31:46.699 "nvmf_tcp": { 00:31:46.699 "mask": "0x20", 00:31:46.699 "tpoint_mask": "0x0" 00:31:46.699 }, 00:31:46.699 "ftl": { 00:31:46.699 "mask": "0x40", 00:31:46.699 "tpoint_mask": "0x0" 00:31:46.699 }, 00:31:46.699 "blobfs": { 00:31:46.699 "mask": "0x80", 00:31:46.699 "tpoint_mask": "0x0" 00:31:46.699 }, 00:31:46.699 "dsa": { 00:31:46.699 "mask": "0x200", 00:31:46.699 "tpoint_mask": "0x0" 00:31:46.699 }, 00:31:46.699 "thread": { 00:31:46.699 "mask": "0x400", 00:31:46.699 
"tpoint_mask": "0x0" 00:31:46.699 }, 00:31:46.699 "nvme_pcie": { 00:31:46.699 "mask": "0x800", 00:31:46.699 "tpoint_mask": "0x0" 00:31:46.699 }, 00:31:46.699 "iaa": { 00:31:46.699 "mask": "0x1000", 00:31:46.699 "tpoint_mask": "0x0" 00:31:46.699 }, 00:31:46.699 "nvme_tcp": { 00:31:46.699 "mask": "0x2000", 00:31:46.699 "tpoint_mask": "0x0" 00:31:46.699 }, 00:31:46.699 "bdev_nvme": { 00:31:46.699 "mask": "0x4000", 00:31:46.699 "tpoint_mask": "0x0" 00:31:46.699 }, 00:31:46.699 "sock": { 00:31:46.699 "mask": "0x8000", 00:31:46.699 "tpoint_mask": "0x0" 00:31:46.699 }, 00:31:46.699 "blob": { 00:31:46.699 "mask": "0x10000", 00:31:46.699 "tpoint_mask": "0x0" 00:31:46.699 }, 00:31:46.699 "bdev_raid": { 00:31:46.699 "mask": "0x20000", 00:31:46.699 "tpoint_mask": "0x0" 00:31:46.699 }, 00:31:46.699 "scheduler": { 00:31:46.699 "mask": "0x40000", 00:31:46.699 "tpoint_mask": "0x0" 00:31:46.699 } 00:31:46.699 }' 00:31:46.699 05:22:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:31:46.699 05:22:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:31:46.699 05:22:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:31:46.699 05:22:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:31:46.699 05:22:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:31:46.699 05:22:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:31:46.699 05:22:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:31:46.958 05:22:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:31:46.958 05:22:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:31:46.958 05:22:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:31:46.958 00:31:46.958 real 0m0.281s 00:31:46.958 user 0m0.244s 00:31:46.958 sys 0m0.030s 00:31:46.958 05:22:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:31:46.958 05:22:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.959 ************************************ 00:31:46.959 END TEST rpc_trace_cmd_test 00:31:46.959 ************************************ 00:31:46.959 05:22:33 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:31:46.959 05:22:33 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:31:46.959 05:22:33 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:31:46.959 05:22:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:46.959 05:22:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:46.959 05:22:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:31:46.959 ************************************ 00:31:46.959 START TEST rpc_daemon_integrity 00:31:46.959 ************************************ 00:31:46.959 05:22:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:31:46.959 05:22:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:31:46.959 05:22:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.959 05:22:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:46.959 05:22:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.959 05:22:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:31:46.959 05:22:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:31:46.959 05:22:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:31:46.959 05:22:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:31:46.959 05:22:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.959 05:22:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:46.959 05:22:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.959 05:22:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:31:46.959 05:22:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:31:46.959 05:22:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.959 05:22:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:47.218 05:22:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.218 05:22:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:31:47.218 { 00:31:47.218 "name": "Malloc2", 00:31:47.218 "aliases": [ 00:31:47.218 "ef0ac492-ca18-4fcd-bfbc-97c6683c7f90" 00:31:47.218 ], 00:31:47.218 "product_name": "Malloc disk", 00:31:47.218 "block_size": 512, 00:31:47.218 "num_blocks": 16384, 00:31:47.218 "uuid": "ef0ac492-ca18-4fcd-bfbc-97c6683c7f90", 00:31:47.218 "assigned_rate_limits": { 00:31:47.218 "rw_ios_per_sec": 0, 00:31:47.218 "rw_mbytes_per_sec": 0, 00:31:47.218 "r_mbytes_per_sec": 0, 00:31:47.218 "w_mbytes_per_sec": 0 00:31:47.218 }, 00:31:47.218 "claimed": false, 00:31:47.218 "zoned": false, 00:31:47.218 "supported_io_types": { 00:31:47.218 "read": true, 00:31:47.218 "write": true, 00:31:47.218 "unmap": true, 00:31:47.218 "flush": true, 00:31:47.218 "reset": true, 00:31:47.218 "nvme_admin": false, 00:31:47.218 "nvme_io": false, 00:31:47.218 "nvme_io_md": false, 00:31:47.218 "write_zeroes": true, 00:31:47.218 "zcopy": true, 00:31:47.218 "get_zone_info": false, 00:31:47.218 "zone_management": false, 00:31:47.218 "zone_append": false, 00:31:47.218 "compare": false, 00:31:47.218 "compare_and_write": false, 00:31:47.218 "abort": true, 00:31:47.218 "seek_hole": false, 00:31:47.218 "seek_data": false, 00:31:47.218 "copy": true, 00:31:47.218 "nvme_iov_md": false 00:31:47.218 }, 00:31:47.218 "memory_domains": [ 00:31:47.218 { 00:31:47.218 "dma_device_id": "system", 00:31:47.218 "dma_device_type": 1 00:31:47.218 }, 00:31:47.218 { 00:31:47.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:47.218 "dma_device_type": 2 00:31:47.218 } 
00:31:47.218 ], 00:31:47.218 "driver_specific": {} 00:31:47.218 } 00:31:47.218 ]' 00:31:47.218 05:22:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:31:47.218 05:22:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:31:47.218 05:22:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:31:47.218 05:22:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.218 05:22:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:47.218 [2024-12-09 05:22:34.008479] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:31:47.218 [2024-12-09 05:22:34.008573] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:47.218 [2024-12-09 05:22:34.008601] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:31:47.218 [2024-12-09 05:22:34.008618] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:47.218 [2024-12-09 05:22:34.012005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:47.218 [2024-12-09 05:22:34.012095] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:31:47.218 Passthru0 00:31:47.218 05:22:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.218 05:22:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:31:47.218 05:22:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.218 05:22:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:47.218 05:22:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.218 05:22:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:31:47.218 { 00:31:47.218 "name": "Malloc2", 00:31:47.218 "aliases": [ 00:31:47.218 "ef0ac492-ca18-4fcd-bfbc-97c6683c7f90" 
00:31:47.218 ], 00:31:47.218 "product_name": "Malloc disk", 00:31:47.218 "block_size": 512, 00:31:47.218 "num_blocks": 16384, 00:31:47.218 "uuid": "ef0ac492-ca18-4fcd-bfbc-97c6683c7f90", 00:31:47.218 "assigned_rate_limits": { 00:31:47.218 "rw_ios_per_sec": 0, 00:31:47.218 "rw_mbytes_per_sec": 0, 00:31:47.218 "r_mbytes_per_sec": 0, 00:31:47.218 "w_mbytes_per_sec": 0 00:31:47.218 }, 00:31:47.218 "claimed": true, 00:31:47.218 "claim_type": "exclusive_write", 00:31:47.218 "zoned": false, 00:31:47.218 "supported_io_types": { 00:31:47.218 "read": true, 00:31:47.218 "write": true, 00:31:47.218 "unmap": true, 00:31:47.218 "flush": true, 00:31:47.218 "reset": true, 00:31:47.218 "nvme_admin": false, 00:31:47.218 "nvme_io": false, 00:31:47.218 "nvme_io_md": false, 00:31:47.218 "write_zeroes": true, 00:31:47.218 "zcopy": true, 00:31:47.218 "get_zone_info": false, 00:31:47.218 "zone_management": false, 00:31:47.218 "zone_append": false, 00:31:47.218 "compare": false, 00:31:47.218 "compare_and_write": false, 00:31:47.218 "abort": true, 00:31:47.218 "seek_hole": false, 00:31:47.218 "seek_data": false, 00:31:47.218 "copy": true, 00:31:47.218 "nvme_iov_md": false 00:31:47.218 }, 00:31:47.218 "memory_domains": [ 00:31:47.218 { 00:31:47.218 "dma_device_id": "system", 00:31:47.218 "dma_device_type": 1 00:31:47.218 }, 00:31:47.218 { 00:31:47.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:47.218 "dma_device_type": 2 00:31:47.218 } 00:31:47.218 ], 00:31:47.218 "driver_specific": {} 00:31:47.218 }, 00:31:47.218 { 00:31:47.218 "name": "Passthru0", 00:31:47.218 "aliases": [ 00:31:47.218 "31875ddf-8f0d-5bd7-bbc2-f54a799243ae" 00:31:47.218 ], 00:31:47.218 "product_name": "passthru", 00:31:47.218 "block_size": 512, 00:31:47.218 "num_blocks": 16384, 00:31:47.218 "uuid": "31875ddf-8f0d-5bd7-bbc2-f54a799243ae", 00:31:47.218 "assigned_rate_limits": { 00:31:47.218 "rw_ios_per_sec": 0, 00:31:47.218 "rw_mbytes_per_sec": 0, 00:31:47.218 "r_mbytes_per_sec": 0, 00:31:47.218 "w_mbytes_per_sec": 0 
00:31:47.218 }, 00:31:47.218 "claimed": false, 00:31:47.218 "zoned": false, 00:31:47.218 "supported_io_types": { 00:31:47.218 "read": true, 00:31:47.218 "write": true, 00:31:47.218 "unmap": true, 00:31:47.218 "flush": true, 00:31:47.218 "reset": true, 00:31:47.218 "nvme_admin": false, 00:31:47.218 "nvme_io": false, 00:31:47.218 "nvme_io_md": false, 00:31:47.218 "write_zeroes": true, 00:31:47.218 "zcopy": true, 00:31:47.218 "get_zone_info": false, 00:31:47.218 "zone_management": false, 00:31:47.218 "zone_append": false, 00:31:47.218 "compare": false, 00:31:47.218 "compare_and_write": false, 00:31:47.218 "abort": true, 00:31:47.218 "seek_hole": false, 00:31:47.218 "seek_data": false, 00:31:47.218 "copy": true, 00:31:47.218 "nvme_iov_md": false 00:31:47.218 }, 00:31:47.218 "memory_domains": [ 00:31:47.218 { 00:31:47.218 "dma_device_id": "system", 00:31:47.218 "dma_device_type": 1 00:31:47.218 }, 00:31:47.218 { 00:31:47.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:47.218 "dma_device_type": 2 00:31:47.218 } 00:31:47.218 ], 00:31:47.218 "driver_specific": { 00:31:47.218 "passthru": { 00:31:47.218 "name": "Passthru0", 00:31:47.218 "base_bdev_name": "Malloc2" 00:31:47.218 } 00:31:47.218 } 00:31:47.218 } 00:31:47.218 ]' 00:31:47.218 05:22:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:31:47.218 05:22:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:31:47.218 05:22:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:31:47.218 05:22:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.218 05:22:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:47.218 05:22:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.218 05:22:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:31:47.218 05:22:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:31:47.218 05:22:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:47.218 05:22:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.218 05:22:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:31:47.218 05:22:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.218 05:22:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:47.218 05:22:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.218 05:22:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:31:47.218 05:22:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:31:47.477 05:22:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:31:47.477 00:31:47.477 real 0m0.379s 00:31:47.477 user 0m0.231s 00:31:47.477 sys 0m0.038s 00:31:47.477 05:22:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:47.477 ************************************ 00:31:47.477 END TEST rpc_daemon_integrity 00:31:47.477 ************************************ 00:31:47.477 05:22:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:47.477 05:22:34 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:47.477 05:22:34 rpc -- rpc/rpc.sh@84 -- # killprocess 56800 00:31:47.477 05:22:34 rpc -- common/autotest_common.sh@954 -- # '[' -z 56800 ']' 00:31:47.477 05:22:34 rpc -- common/autotest_common.sh@958 -- # kill -0 56800 00:31:47.477 05:22:34 rpc -- common/autotest_common.sh@959 -- # uname 00:31:47.477 05:22:34 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:47.477 05:22:34 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56800 00:31:47.477 05:22:34 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:47.477 05:22:34 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:47.477 
killing process with pid 56800 00:31:47.477 05:22:34 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56800' 00:31:47.477 05:22:34 rpc -- common/autotest_common.sh@973 -- # kill 56800 00:31:47.477 05:22:34 rpc -- common/autotest_common.sh@978 -- # wait 56800 00:31:50.041 00:31:50.041 real 0m5.233s 00:31:50.041 user 0m5.835s 00:31:50.041 sys 0m1.018s 00:31:50.041 05:22:36 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:50.041 05:22:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:31:50.041 ************************************ 00:31:50.041 END TEST rpc 00:31:50.041 ************************************ 00:31:50.041 05:22:36 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:31:50.041 05:22:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:50.041 05:22:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:50.041 05:22:36 -- common/autotest_common.sh@10 -- # set +x 00:31:50.041 ************************************ 00:31:50.041 START TEST skip_rpc 00:31:50.041 ************************************ 00:31:50.041 05:22:36 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:31:50.041 * Looking for test storage... 
00:31:50.041 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:31:50.041 05:22:36 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:50.042 05:22:36 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:31:50.042 05:22:36 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:50.042 05:22:36 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:50.042 05:22:36 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:50.042 05:22:36 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:50.042 05:22:36 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:50.042 05:22:36 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:31:50.042 05:22:36 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:31:50.042 05:22:36 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:31:50.042 05:22:36 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:31:50.042 05:22:36 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:31:50.042 05:22:36 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:31:50.042 05:22:36 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:31:50.042 05:22:36 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:50.042 05:22:36 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:31:50.042 05:22:36 skip_rpc -- scripts/common.sh@345 -- # : 1 00:31:50.042 05:22:36 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:50.042 05:22:36 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:50.042 05:22:36 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:31:50.042 05:22:36 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:31:50.042 05:22:36 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:50.042 05:22:36 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:31:50.042 05:22:36 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:31:50.042 05:22:36 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:31:50.042 05:22:36 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:31:50.042 05:22:36 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:50.042 05:22:36 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:31:50.042 05:22:36 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:31:50.042 05:22:36 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:50.042 05:22:36 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:50.042 05:22:36 skip_rpc -- scripts/common.sh@368 -- # return 0 00:31:50.042 05:22:36 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:50.042 05:22:36 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:50.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.042 --rc genhtml_branch_coverage=1 00:31:50.042 --rc genhtml_function_coverage=1 00:31:50.042 --rc genhtml_legend=1 00:31:50.042 --rc geninfo_all_blocks=1 00:31:50.042 --rc geninfo_unexecuted_blocks=1 00:31:50.042 00:31:50.042 ' 00:31:50.042 05:22:36 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:50.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.042 --rc genhtml_branch_coverage=1 00:31:50.042 --rc genhtml_function_coverage=1 00:31:50.042 --rc genhtml_legend=1 00:31:50.042 --rc geninfo_all_blocks=1 00:31:50.042 --rc geninfo_unexecuted_blocks=1 00:31:50.042 00:31:50.042 ' 00:31:50.042 05:22:36 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:31:50.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.042 --rc genhtml_branch_coverage=1 00:31:50.042 --rc genhtml_function_coverage=1 00:31:50.042 --rc genhtml_legend=1 00:31:50.042 --rc geninfo_all_blocks=1 00:31:50.042 --rc geninfo_unexecuted_blocks=1 00:31:50.042 00:31:50.042 ' 00:31:50.042 05:22:36 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:50.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.042 --rc genhtml_branch_coverage=1 00:31:50.042 --rc genhtml_function_coverage=1 00:31:50.042 --rc genhtml_legend=1 00:31:50.042 --rc geninfo_all_blocks=1 00:31:50.042 --rc geninfo_unexecuted_blocks=1 00:31:50.042 00:31:50.042 ' 00:31:50.042 05:22:36 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:31:50.042 05:22:36 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:31:50.042 05:22:36 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:31:50.042 05:22:36 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:50.042 05:22:36 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:50.042 05:22:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:50.042 ************************************ 00:31:50.042 START TEST skip_rpc 00:31:50.042 ************************************ 00:31:50.042 05:22:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:31:50.042 05:22:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57035 00:31:50.042 05:22:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:31:50.042 05:22:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:31:50.042 05:22:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:31:50.042 [2024-12-09 05:22:36.850644] Starting SPDK v25.01-pre 
git sha1 afe42438a / DPDK 24.03.0 initialization... 00:31:50.042 [2024-12-09 05:22:36.850856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57035 ] 00:31:50.300 [2024-12-09 05:22:37.039368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:50.300 [2024-12-09 05:22:37.186361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:55.583 05:22:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:31:55.584 05:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:31:55.584 05:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:31:55.584 05:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:55.584 05:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:55.584 05:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:55.584 05:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:55.584 05:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:31:55.584 05:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.584 05:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:55.584 05:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:55.584 05:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:31:55.584 05:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:55.584 05:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:55.584 05:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:31:55.584 05:22:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:31:55.584 05:22:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57035 00:31:55.584 05:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57035 ']' 00:31:55.584 05:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57035 00:31:55.584 05:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:31:55.584 05:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:55.584 05:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57035 00:31:55.584 killing process with pid 57035 00:31:55.584 05:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:55.584 05:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:55.584 05:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57035' 00:31:55.584 05:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57035 00:31:55.584 05:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57035 00:31:57.527 ************************************ 00:31:57.527 END TEST skip_rpc 00:31:57.527 ************************************ 00:31:57.527 00:31:57.527 real 0m7.335s 00:31:57.527 user 0m6.658s 00:31:57.527 sys 0m0.569s 00:31:57.527 05:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:57.527 05:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:57.527 05:22:44 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:31:57.527 05:22:44 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:57.527 05:22:44 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:57.527 05:22:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:57.527 
************************************ 00:31:57.527 START TEST skip_rpc_with_json 00:31:57.527 ************************************ 00:31:57.527 05:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:31:57.527 05:22:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:31:57.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:57.527 05:22:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57139 00:31:57.527 05:22:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:31:57.527 05:22:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:31:57.527 05:22:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57139 00:31:57.527 05:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57139 ']' 00:31:57.527 05:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:57.527 05:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:57.527 05:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:57.527 05:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:57.527 05:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:31:57.527 [2024-12-09 05:22:44.221015] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:31:57.527 [2024-12-09 05:22:44.221182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57139 ] 00:31:57.527 [2024-12-09 05:22:44.401216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:57.823 [2024-12-09 05:22:44.556125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:58.755 05:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:58.755 05:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:31:58.755 05:22:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:31:58.755 05:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.755 05:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:31:58.755 [2024-12-09 05:22:45.441416] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:31:58.755 request: 00:31:58.755 { 00:31:58.755 "trtype": "tcp", 00:31:58.755 "method": "nvmf_get_transports", 00:31:58.755 "req_id": 1 00:31:58.755 } 00:31:58.755 Got JSON-RPC error response 00:31:58.755 response: 00:31:58.755 { 00:31:58.755 "code": -19, 00:31:58.755 "message": "No such device" 00:31:58.755 } 00:31:58.755 05:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:58.755 05:22:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:31:58.755 05:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.755 05:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:31:58.755 [2024-12-09 05:22:45.453565] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:31:58.755 05:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.755 05:22:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:31:58.755 05:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.755 05:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:31:58.755 05:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.755 05:22:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:31:58.755 { 00:31:58.755 "subsystems": [ 00:31:58.755 { 00:31:58.755 "subsystem": "fsdev", 00:31:58.755 "config": [ 00:31:58.755 { 00:31:58.755 "method": "fsdev_set_opts", 00:31:58.755 "params": { 00:31:58.755 "fsdev_io_pool_size": 65535, 00:31:58.755 "fsdev_io_cache_size": 256 00:31:58.755 } 00:31:58.755 } 00:31:58.755 ] 00:31:58.755 }, 00:31:58.755 { 00:31:58.755 "subsystem": "keyring", 00:31:58.755 "config": [] 00:31:58.755 }, 00:31:58.755 { 00:31:58.755 "subsystem": "iobuf", 00:31:58.755 "config": [ 00:31:58.755 { 00:31:58.755 "method": "iobuf_set_options", 00:31:58.755 "params": { 00:31:58.755 "small_pool_count": 8192, 00:31:58.755 "large_pool_count": 1024, 00:31:58.755 "small_bufsize": 8192, 00:31:58.755 "large_bufsize": 135168, 00:31:58.755 "enable_numa": false 00:31:58.755 } 00:31:58.755 } 00:31:58.755 ] 00:31:58.755 }, 00:31:58.755 { 00:31:58.755 "subsystem": "sock", 00:31:58.755 "config": [ 00:31:58.755 { 00:31:58.755 "method": "sock_set_default_impl", 00:31:58.755 "params": { 00:31:58.755 "impl_name": "posix" 00:31:58.755 } 00:31:58.755 }, 00:31:58.755 { 00:31:58.755 "method": "sock_impl_set_options", 00:31:58.755 "params": { 00:31:58.755 "impl_name": "ssl", 00:31:58.755 "recv_buf_size": 4096, 00:31:58.755 "send_buf_size": 4096, 00:31:58.755 "enable_recv_pipe": true, 00:31:58.755 "enable_quickack": false, 00:31:58.755 
"enable_placement_id": 0, 00:31:58.755 "enable_zerocopy_send_server": true, 00:31:58.755 "enable_zerocopy_send_client": false, 00:31:58.755 "zerocopy_threshold": 0, 00:31:58.755 "tls_version": 0, 00:31:58.755 "enable_ktls": false 00:31:58.755 } 00:31:58.755 }, 00:31:58.755 { 00:31:58.755 "method": "sock_impl_set_options", 00:31:58.755 "params": { 00:31:58.755 "impl_name": "posix", 00:31:58.755 "recv_buf_size": 2097152, 00:31:58.755 "send_buf_size": 2097152, 00:31:58.755 "enable_recv_pipe": true, 00:31:58.755 "enable_quickack": false, 00:31:58.755 "enable_placement_id": 0, 00:31:58.755 "enable_zerocopy_send_server": true, 00:31:58.755 "enable_zerocopy_send_client": false, 00:31:58.755 "zerocopy_threshold": 0, 00:31:58.755 "tls_version": 0, 00:31:58.755 "enable_ktls": false 00:31:58.755 } 00:31:58.755 } 00:31:58.755 ] 00:31:58.755 }, 00:31:58.755 { 00:31:58.755 "subsystem": "vmd", 00:31:58.755 "config": [] 00:31:58.755 }, 00:31:58.755 { 00:31:58.755 "subsystem": "accel", 00:31:58.755 "config": [ 00:31:58.755 { 00:31:58.755 "method": "accel_set_options", 00:31:58.755 "params": { 00:31:58.755 "small_cache_size": 128, 00:31:58.755 "large_cache_size": 16, 00:31:58.755 "task_count": 2048, 00:31:58.755 "sequence_count": 2048, 00:31:58.755 "buf_count": 2048 00:31:58.755 } 00:31:58.755 } 00:31:58.755 ] 00:31:58.755 }, 00:31:58.755 { 00:31:58.755 "subsystem": "bdev", 00:31:58.755 "config": [ 00:31:58.755 { 00:31:58.755 "method": "bdev_set_options", 00:31:58.755 "params": { 00:31:58.755 "bdev_io_pool_size": 65535, 00:31:58.755 "bdev_io_cache_size": 256, 00:31:58.755 "bdev_auto_examine": true, 00:31:58.755 "iobuf_small_cache_size": 128, 00:31:58.755 "iobuf_large_cache_size": 16 00:31:58.755 } 00:31:58.755 }, 00:31:58.755 { 00:31:58.755 "method": "bdev_raid_set_options", 00:31:58.755 "params": { 00:31:58.755 "process_window_size_kb": 1024, 00:31:58.755 "process_max_bandwidth_mb_sec": 0 00:31:58.755 } 00:31:58.755 }, 00:31:58.755 { 00:31:58.755 "method": "bdev_iscsi_set_options", 
00:31:58.755 "params": { 00:31:58.755 "timeout_sec": 30 00:31:58.755 } 00:31:58.755 }, 00:31:58.755 { 00:31:58.755 "method": "bdev_nvme_set_options", 00:31:58.755 "params": { 00:31:58.755 "action_on_timeout": "none", 00:31:58.755 "timeout_us": 0, 00:31:58.755 "timeout_admin_us": 0, 00:31:58.755 "keep_alive_timeout_ms": 10000, 00:31:58.755 "arbitration_burst": 0, 00:31:58.755 "low_priority_weight": 0, 00:31:58.755 "medium_priority_weight": 0, 00:31:58.755 "high_priority_weight": 0, 00:31:58.755 "nvme_adminq_poll_period_us": 10000, 00:31:58.755 "nvme_ioq_poll_period_us": 0, 00:31:58.755 "io_queue_requests": 0, 00:31:58.755 "delay_cmd_submit": true, 00:31:58.755 "transport_retry_count": 4, 00:31:58.755 "bdev_retry_count": 3, 00:31:58.755 "transport_ack_timeout": 0, 00:31:58.755 "ctrlr_loss_timeout_sec": 0, 00:31:58.755 "reconnect_delay_sec": 0, 00:31:58.755 "fast_io_fail_timeout_sec": 0, 00:31:58.755 "disable_auto_failback": false, 00:31:58.755 "generate_uuids": false, 00:31:58.755 "transport_tos": 0, 00:31:58.755 "nvme_error_stat": false, 00:31:58.755 "rdma_srq_size": 0, 00:31:58.755 "io_path_stat": false, 00:31:58.755 "allow_accel_sequence": false, 00:31:58.755 "rdma_max_cq_size": 0, 00:31:58.755 "rdma_cm_event_timeout_ms": 0, 00:31:58.755 "dhchap_digests": [ 00:31:58.755 "sha256", 00:31:58.755 "sha384", 00:31:58.755 "sha512" 00:31:58.755 ], 00:31:58.755 "dhchap_dhgroups": [ 00:31:58.755 "null", 00:31:58.755 "ffdhe2048", 00:31:58.755 "ffdhe3072", 00:31:58.755 "ffdhe4096", 00:31:58.755 "ffdhe6144", 00:31:58.755 "ffdhe8192" 00:31:58.755 ] 00:31:58.755 } 00:31:58.755 }, 00:31:58.755 { 00:31:58.755 "method": "bdev_nvme_set_hotplug", 00:31:58.755 "params": { 00:31:58.755 "period_us": 100000, 00:31:58.755 "enable": false 00:31:58.755 } 00:31:58.755 }, 00:31:58.755 { 00:31:58.755 "method": "bdev_wait_for_examine" 00:31:58.755 } 00:31:58.755 ] 00:31:58.755 }, 00:31:58.755 { 00:31:58.755 "subsystem": "scsi", 00:31:58.755 "config": null 00:31:58.755 }, 00:31:58.755 { 
00:31:58.755 "subsystem": "scheduler", 00:31:58.755 "config": [ 00:31:58.755 { 00:31:58.755 "method": "framework_set_scheduler", 00:31:58.755 "params": { 00:31:58.755 "name": "static" 00:31:58.755 } 00:31:58.755 } 00:31:58.755 ] 00:31:58.755 }, 00:31:58.755 { 00:31:58.755 "subsystem": "vhost_scsi", 00:31:58.755 "config": [] 00:31:58.755 }, 00:31:58.755 { 00:31:58.755 "subsystem": "vhost_blk", 00:31:58.755 "config": [] 00:31:58.755 }, 00:31:58.755 { 00:31:58.755 "subsystem": "ublk", 00:31:58.755 "config": [] 00:31:58.755 }, 00:31:58.755 { 00:31:58.755 "subsystem": "nbd", 00:31:58.755 "config": [] 00:31:58.755 }, 00:31:58.755 { 00:31:58.755 "subsystem": "nvmf", 00:31:58.755 "config": [ 00:31:58.755 { 00:31:58.755 "method": "nvmf_set_config", 00:31:58.755 "params": { 00:31:58.755 "discovery_filter": "match_any", 00:31:58.755 "admin_cmd_passthru": { 00:31:58.755 "identify_ctrlr": false 00:31:58.755 }, 00:31:58.755 "dhchap_digests": [ 00:31:58.755 "sha256", 00:31:58.755 "sha384", 00:31:58.755 "sha512" 00:31:58.755 ], 00:31:58.755 "dhchap_dhgroups": [ 00:31:58.755 "null", 00:31:58.755 "ffdhe2048", 00:31:58.755 "ffdhe3072", 00:31:58.755 "ffdhe4096", 00:31:58.755 "ffdhe6144", 00:31:58.755 "ffdhe8192" 00:31:58.755 ] 00:31:58.755 } 00:31:58.756 }, 00:31:58.756 { 00:31:58.756 "method": "nvmf_set_max_subsystems", 00:31:58.756 "params": { 00:31:58.756 "max_subsystems": 1024 00:31:58.756 } 00:31:58.756 }, 00:31:58.756 { 00:31:58.756 "method": "nvmf_set_crdt", 00:31:58.756 "params": { 00:31:58.756 "crdt1": 0, 00:31:58.756 "crdt2": 0, 00:31:58.756 "crdt3": 0 00:31:58.756 } 00:31:58.756 }, 00:31:58.756 { 00:31:58.756 "method": "nvmf_create_transport", 00:31:58.756 "params": { 00:31:58.756 "trtype": "TCP", 00:31:58.756 "max_queue_depth": 128, 00:31:58.756 "max_io_qpairs_per_ctrlr": 127, 00:31:58.756 "in_capsule_data_size": 4096, 00:31:58.756 "max_io_size": 131072, 00:31:58.756 "io_unit_size": 131072, 00:31:58.756 "max_aq_depth": 128, 00:31:58.756 "num_shared_buffers": 511, 
00:31:58.756 "buf_cache_size": 4294967295, 00:31:58.756 "dif_insert_or_strip": false, 00:31:58.756 "zcopy": false, 00:31:58.756 "c2h_success": true, 00:31:58.756 "sock_priority": 0, 00:31:58.756 "abort_timeout_sec": 1, 00:31:58.756 "ack_timeout": 0, 00:31:58.756 "data_wr_pool_size": 0 00:31:58.756 } 00:31:58.756 } 00:31:58.756 ] 00:31:58.756 }, 00:31:58.756 { 00:31:58.756 "subsystem": "iscsi", 00:31:58.756 "config": [ 00:31:58.756 { 00:31:58.756 "method": "iscsi_set_options", 00:31:58.756 "params": { 00:31:58.756 "node_base": "iqn.2016-06.io.spdk", 00:31:58.756 "max_sessions": 128, 00:31:58.756 "max_connections_per_session": 2, 00:31:58.756 "max_queue_depth": 64, 00:31:58.756 "default_time2wait": 2, 00:31:58.756 "default_time2retain": 20, 00:31:58.756 "first_burst_length": 8192, 00:31:58.756 "immediate_data": true, 00:31:58.756 "allow_duplicated_isid": false, 00:31:58.756 "error_recovery_level": 0, 00:31:58.756 "nop_timeout": 60, 00:31:58.756 "nop_in_interval": 30, 00:31:58.756 "disable_chap": false, 00:31:58.756 "require_chap": false, 00:31:58.756 "mutual_chap": false, 00:31:58.756 "chap_group": 0, 00:31:58.756 "max_large_datain_per_connection": 64, 00:31:58.756 "max_r2t_per_connection": 4, 00:31:58.756 "pdu_pool_size": 36864, 00:31:58.756 "immediate_data_pool_size": 16384, 00:31:58.756 "data_out_pool_size": 2048 00:31:58.756 } 00:31:58.756 } 00:31:58.756 ] 00:31:58.756 } 00:31:58.756 ] 00:31:58.756 } 00:31:58.756 05:22:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:31:58.756 05:22:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57139 00:31:58.756 05:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57139 ']' 00:31:58.756 05:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57139 00:31:58.756 05:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:31:58.756 05:22:45 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:58.756 05:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57139 00:31:58.756 killing process with pid 57139 00:31:58.756 05:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:58.756 05:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:58.756 05:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57139' 00:31:58.756 05:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57139 00:31:58.756 05:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57139 00:32:01.286 05:22:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57195 00:32:01.286 05:22:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:32:01.286 05:22:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:32:06.592 05:22:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57195 00:32:06.592 05:22:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57195 ']' 00:32:06.592 05:22:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57195 00:32:06.592 05:22:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:32:06.592 05:22:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:06.592 05:22:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57195 00:32:06.592 killing process with pid 57195 00:32:06.592 05:22:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:06.592 05:22:53 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:06.592 05:22:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57195' 00:32:06.592 05:22:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57195 00:32:06.592 05:22:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57195 00:32:08.490 05:22:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:32:08.490 05:22:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:32:08.747 ************************************ 00:32:08.747 END TEST skip_rpc_with_json 00:32:08.747 ************************************ 00:32:08.747 00:32:08.747 real 0m11.356s 00:32:08.747 user 0m10.541s 00:32:08.747 sys 0m1.257s 00:32:08.747 05:22:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:08.747 05:22:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:32:08.747 05:22:55 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:32:08.747 05:22:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:08.747 05:22:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:08.747 05:22:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:08.747 ************************************ 00:32:08.747 START TEST skip_rpc_with_delay 00:32:08.747 ************************************ 00:32:08.747 05:22:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:32:08.747 05:22:55 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:32:08.747 05:22:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:32:08.747 
05:22:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:32:08.747 05:22:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:08.747 05:22:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:08.747 05:22:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:08.747 05:22:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:08.747 05:22:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:08.747 05:22:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:08.747 05:22:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:08.748 05:22:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:32:08.748 05:22:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:32:08.748 [2024-12-09 05:22:55.661274] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:32:09.005 05:22:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:32:09.005 05:22:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:09.005 05:22:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:09.005 05:22:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:09.005 00:32:09.005 real 0m0.221s 00:32:09.005 user 0m0.119s 00:32:09.005 sys 0m0.099s 00:32:09.005 05:22:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:09.005 ************************************ 00:32:09.005 END TEST skip_rpc_with_delay 00:32:09.005 ************************************ 00:32:09.005 05:22:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:32:09.005 05:22:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:32:09.005 05:22:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:32:09.005 05:22:55 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:32:09.005 05:22:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:09.005 05:22:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:09.005 05:22:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:09.005 ************************************ 00:32:09.005 START TEST exit_on_failed_rpc_init 00:32:09.005 ************************************ 00:32:09.005 05:22:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:32:09.005 05:22:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57327 00:32:09.005 05:22:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:32:09.005 05:22:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57327 00:32:09.005 05:22:55 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57327 ']' 00:32:09.005 05:22:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:09.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:09.005 05:22:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:09.005 05:22:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:09.005 05:22:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:09.005 05:22:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:32:09.005 [2024-12-09 05:22:55.937582] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:32:09.006 [2024-12-09 05:22:55.938106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57327 ] 00:32:09.264 [2024-12-09 05:22:56.119901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:09.521 [2024-12-09 05:22:56.276632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:10.454 05:22:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:10.454 05:22:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:32:10.454 05:22:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:32:10.454 05:22:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:32:10.454 05:22:57 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:32:10.454 05:22:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:32:10.455 05:22:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:10.455 05:22:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:10.455 05:22:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:10.455 05:22:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:10.455 05:22:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:10.455 05:22:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:10.455 05:22:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:10.455 05:22:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:32:10.455 05:22:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:32:10.455 [2024-12-09 05:22:57.363263] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:32:10.455 [2024-12-09 05:22:57.363508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57352 ] 00:32:10.712 [2024-12-09 05:22:57.558176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:10.972 [2024-12-09 05:22:57.709511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:10.972 [2024-12-09 05:22:57.709652] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:32:10.972 [2024-12-09 05:22:57.709676] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:32:10.972 [2024-12-09 05:22:57.709694] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:11.231 05:22:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:32:11.231 05:22:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:11.231 05:22:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:32:11.231 05:22:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:32:11.231 05:22:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:32:11.231 05:22:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:11.231 05:22:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:32:11.231 05:22:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57327 00:32:11.231 05:22:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57327 ']' 00:32:11.231 05:22:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57327 00:32:11.231 05:22:58 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:32:11.231 05:22:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:11.231 05:22:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57327 00:32:11.231 killing process with pid 57327 00:32:11.231 05:22:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:11.231 05:22:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:11.231 05:22:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57327' 00:32:11.231 05:22:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57327 00:32:11.231 05:22:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57327 00:32:13.807 ************************************ 00:32:13.807 END TEST exit_on_failed_rpc_init 00:32:13.807 ************************************ 00:32:13.807 00:32:13.807 real 0m4.752s 00:32:13.807 user 0m5.306s 00:32:13.807 sys 0m0.822s 00:32:13.807 05:23:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:13.807 05:23:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:32:13.807 05:23:00 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:32:13.807 00:32:13.807 real 0m24.065s 00:32:13.807 user 0m22.787s 00:32:13.807 sys 0m2.969s 00:32:13.807 05:23:00 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:13.807 ************************************ 00:32:13.807 END TEST skip_rpc 00:32:13.807 ************************************ 00:32:13.807 05:23:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:13.807 05:23:00 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:32:13.807 05:23:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:13.807 05:23:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:13.807 05:23:00 -- common/autotest_common.sh@10 -- # set +x 00:32:13.807 ************************************ 00:32:13.807 START TEST rpc_client 00:32:13.807 ************************************ 00:32:13.807 05:23:00 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:32:13.807 * Looking for test storage... 00:32:13.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:32:13.807 05:23:00 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:13.807 05:23:00 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:13.807 05:23:00 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:32:14.068 05:23:00 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:14.068 05:23:00 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:14.068 05:23:00 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:14.068 05:23:00 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:14.068 05:23:00 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:32:14.068 05:23:00 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:32:14.068 05:23:00 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:32:14.068 05:23:00 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:32:14.068 05:23:00 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:32:14.068 05:23:00 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:32:14.068 05:23:00 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:32:14.068 05:23:00 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:14.068 05:23:00 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:32:14.068 05:23:00 rpc_client -- scripts/common.sh@345 
-- # : 1 00:32:14.068 05:23:00 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:14.068 05:23:00 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:14.068 05:23:00 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:32:14.068 05:23:00 rpc_client -- scripts/common.sh@353 -- # local d=1 00:32:14.068 05:23:00 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:14.068 05:23:00 rpc_client -- scripts/common.sh@355 -- # echo 1 00:32:14.068 05:23:00 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:32:14.068 05:23:00 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:32:14.068 05:23:00 rpc_client -- scripts/common.sh@353 -- # local d=2 00:32:14.068 05:23:00 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:14.068 05:23:00 rpc_client -- scripts/common.sh@355 -- # echo 2 00:32:14.068 05:23:00 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:32:14.068 05:23:00 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:14.068 05:23:00 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:14.068 05:23:00 rpc_client -- scripts/common.sh@368 -- # return 0 00:32:14.068 05:23:00 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:14.068 05:23:00 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:14.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.068 --rc genhtml_branch_coverage=1 00:32:14.068 --rc genhtml_function_coverage=1 00:32:14.068 --rc genhtml_legend=1 00:32:14.068 --rc geninfo_all_blocks=1 00:32:14.068 --rc geninfo_unexecuted_blocks=1 00:32:14.068 00:32:14.068 ' 00:32:14.068 05:23:00 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:14.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.068 --rc genhtml_branch_coverage=1 00:32:14.068 --rc genhtml_function_coverage=1 00:32:14.068 --rc 
genhtml_legend=1 00:32:14.069 --rc geninfo_all_blocks=1 00:32:14.069 --rc geninfo_unexecuted_blocks=1 00:32:14.069 00:32:14.069 ' 00:32:14.069 05:23:00 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:14.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.069 --rc genhtml_branch_coverage=1 00:32:14.069 --rc genhtml_function_coverage=1 00:32:14.069 --rc genhtml_legend=1 00:32:14.069 --rc geninfo_all_blocks=1 00:32:14.069 --rc geninfo_unexecuted_blocks=1 00:32:14.069 00:32:14.069 ' 00:32:14.069 05:23:00 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:14.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.069 --rc genhtml_branch_coverage=1 00:32:14.069 --rc genhtml_function_coverage=1 00:32:14.069 --rc genhtml_legend=1 00:32:14.069 --rc geninfo_all_blocks=1 00:32:14.069 --rc geninfo_unexecuted_blocks=1 00:32:14.069 00:32:14.069 ' 00:32:14.069 05:23:00 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:32:14.069 OK 00:32:14.069 05:23:00 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:32:14.069 00:32:14.069 real 0m0.266s 00:32:14.069 user 0m0.156s 00:32:14.069 sys 0m0.117s 00:32:14.069 ************************************ 00:32:14.069 END TEST rpc_client 00:32:14.069 ************************************ 00:32:14.069 05:23:00 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:14.069 05:23:00 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:32:14.069 05:23:00 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:32:14.069 05:23:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:14.069 05:23:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:14.069 05:23:00 -- common/autotest_common.sh@10 -- # set +x 00:32:14.069 ************************************ 00:32:14.069 START TEST json_config 
00:32:14.069 ************************************ 00:32:14.069 05:23:00 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:32:14.069 05:23:01 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:14.069 05:23:01 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:32:14.069 05:23:01 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:14.331 05:23:01 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:14.331 05:23:01 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:14.331 05:23:01 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:14.331 05:23:01 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:14.331 05:23:01 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:32:14.331 05:23:01 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:32:14.331 05:23:01 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:32:14.331 05:23:01 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:32:14.331 05:23:01 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:32:14.331 05:23:01 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:32:14.331 05:23:01 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:32:14.331 05:23:01 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:14.331 05:23:01 json_config -- scripts/common.sh@344 -- # case "$op" in 00:32:14.331 05:23:01 json_config -- scripts/common.sh@345 -- # : 1 00:32:14.331 05:23:01 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:14.331 05:23:01 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:14.331 05:23:01 json_config -- scripts/common.sh@365 -- # decimal 1 00:32:14.331 05:23:01 json_config -- scripts/common.sh@353 -- # local d=1 00:32:14.331 05:23:01 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:14.331 05:23:01 json_config -- scripts/common.sh@355 -- # echo 1 00:32:14.331 05:23:01 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:32:14.331 05:23:01 json_config -- scripts/common.sh@366 -- # decimal 2 00:32:14.331 05:23:01 json_config -- scripts/common.sh@353 -- # local d=2 00:32:14.331 05:23:01 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:14.331 05:23:01 json_config -- scripts/common.sh@355 -- # echo 2 00:32:14.331 05:23:01 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:32:14.331 05:23:01 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:14.331 05:23:01 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:14.331 05:23:01 json_config -- scripts/common.sh@368 -- # return 0 00:32:14.331 05:23:01 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:14.331 05:23:01 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:14.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.331 --rc genhtml_branch_coverage=1 00:32:14.331 --rc genhtml_function_coverage=1 00:32:14.331 --rc genhtml_legend=1 00:32:14.331 --rc geninfo_all_blocks=1 00:32:14.331 --rc geninfo_unexecuted_blocks=1 00:32:14.331 00:32:14.331 ' 00:32:14.331 05:23:01 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:14.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.331 --rc genhtml_branch_coverage=1 00:32:14.331 --rc genhtml_function_coverage=1 00:32:14.331 --rc genhtml_legend=1 00:32:14.331 --rc geninfo_all_blocks=1 00:32:14.331 --rc geninfo_unexecuted_blocks=1 00:32:14.331 00:32:14.331 ' 00:32:14.331 05:23:01 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:14.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.331 --rc genhtml_branch_coverage=1 00:32:14.331 --rc genhtml_function_coverage=1 00:32:14.331 --rc genhtml_legend=1 00:32:14.331 --rc geninfo_all_blocks=1 00:32:14.331 --rc geninfo_unexecuted_blocks=1 00:32:14.331 00:32:14.331 ' 00:32:14.331 05:23:01 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:14.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.331 --rc genhtml_branch_coverage=1 00:32:14.331 --rc genhtml_function_coverage=1 00:32:14.331 --rc genhtml_legend=1 00:32:14.331 --rc geninfo_all_blocks=1 00:32:14.331 --rc geninfo_unexecuted_blocks=1 00:32:14.331 00:32:14.331 ' 00:32:14.331 05:23:01 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:14.331 05:23:01 json_config -- nvmf/common.sh@7 -- # uname -s 00:32:14.331 05:23:01 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:14.331 05:23:01 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:14.331 05:23:01 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:14.331 05:23:01 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:14.331 05:23:01 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:14.331 05:23:01 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:14.331 05:23:01 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:14.331 05:23:01 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:14.331 05:23:01 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:14.331 05:23:01 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:14.331 05:23:01 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b7a55eba-b4a9-45b1-b3eb-0a1190fde04b 00:32:14.331 05:23:01 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=b7a55eba-b4a9-45b1-b3eb-0a1190fde04b 00:32:14.331 05:23:01 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:14.331 05:23:01 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:14.331 05:23:01 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:32:14.331 05:23:01 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:14.331 05:23:01 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:14.331 05:23:01 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:32:14.331 05:23:01 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:14.331 05:23:01 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:14.331 05:23:01 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:14.331 05:23:01 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.331 05:23:01 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.331 05:23:01 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.331 05:23:01 json_config -- paths/export.sh@5 -- # export PATH 00:32:14.331 05:23:01 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.331 05:23:01 json_config -- nvmf/common.sh@51 -- # : 0 00:32:14.331 05:23:01 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:14.331 05:23:01 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:14.331 05:23:01 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:14.331 05:23:01 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:14.331 05:23:01 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:14.331 05:23:01 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:14.331 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:14.331 05:23:01 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:14.331 05:23:01 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:14.331 05:23:01 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:14.331 05:23:01 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:32:14.331 05:23:01 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:32:14.331 05:23:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:32:14.331 WARNING: No tests are enabled so not running JSON configuration tests 00:32:14.331 05:23:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:32:14.331 05:23:01 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:32:14.332 05:23:01 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:32:14.332 05:23:01 json_config -- json_config/json_config.sh@28 -- # exit 0 00:32:14.332 ************************************ 00:32:14.332 END TEST json_config 00:32:14.332 ************************************ 00:32:14.332 00:32:14.332 real 0m0.201s 00:32:14.332 user 0m0.128s 00:32:14.332 sys 0m0.069s 00:32:14.332 05:23:01 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:14.332 05:23:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:32:14.332 05:23:01 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:32:14.332 05:23:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:14.332 05:23:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:14.332 05:23:01 -- common/autotest_common.sh@10 -- # set +x 00:32:14.332 ************************************ 00:32:14.332 START TEST json_config_extra_key 00:32:14.332 ************************************ 00:32:14.332 05:23:01 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:32:14.332 05:23:01 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:14.332 05:23:01 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:32:14.332 05:23:01 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:14.589 05:23:01 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:14.589 05:23:01 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:14.589 05:23:01 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:14.589 05:23:01 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:14.589 05:23:01 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:32:14.589 05:23:01 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:32:14.589 05:23:01 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:32:14.589 05:23:01 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:32:14.589 05:23:01 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:32:14.589 05:23:01 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:32:14.589 05:23:01 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:32:14.589 05:23:01 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:14.589 05:23:01 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:32:14.589 05:23:01 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:32:14.589 05:23:01 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:14.589 05:23:01 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:14.589 05:23:01 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:32:14.589 05:23:01 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:32:14.589 05:23:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:14.589 05:23:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:32:14.589 05:23:01 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:32:14.589 05:23:01 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:32:14.589 05:23:01 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:32:14.589 05:23:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:14.589 05:23:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:32:14.589 05:23:01 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:32:14.589 05:23:01 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:14.589 05:23:01 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:14.589 05:23:01 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:32:14.589 05:23:01 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:14.589 05:23:01 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:14.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.589 --rc genhtml_branch_coverage=1 00:32:14.589 --rc genhtml_function_coverage=1 00:32:14.589 --rc genhtml_legend=1 00:32:14.589 --rc geninfo_all_blocks=1 00:32:14.589 --rc geninfo_unexecuted_blocks=1 00:32:14.589 00:32:14.589 ' 00:32:14.589 05:23:01 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:14.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.589 --rc genhtml_branch_coverage=1 00:32:14.589 --rc genhtml_function_coverage=1 00:32:14.589 --rc 
genhtml_legend=1 00:32:14.589 --rc geninfo_all_blocks=1 00:32:14.589 --rc geninfo_unexecuted_blocks=1 00:32:14.589 00:32:14.589 ' 00:32:14.589 05:23:01 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:14.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.589 --rc genhtml_branch_coverage=1 00:32:14.589 --rc genhtml_function_coverage=1 00:32:14.589 --rc genhtml_legend=1 00:32:14.589 --rc geninfo_all_blocks=1 00:32:14.589 --rc geninfo_unexecuted_blocks=1 00:32:14.589 00:32:14.589 ' 00:32:14.589 05:23:01 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:14.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.589 --rc genhtml_branch_coverage=1 00:32:14.589 --rc genhtml_function_coverage=1 00:32:14.589 --rc genhtml_legend=1 00:32:14.589 --rc geninfo_all_blocks=1 00:32:14.589 --rc geninfo_unexecuted_blocks=1 00:32:14.589 00:32:14.589 ' 00:32:14.589 05:23:01 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:14.589 05:23:01 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:32:14.589 05:23:01 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:14.589 05:23:01 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:14.589 05:23:01 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:14.589 05:23:01 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:14.589 05:23:01 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:14.589 05:23:01 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:14.589 05:23:01 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:14.589 05:23:01 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:14.589 05:23:01 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:14.589 05:23:01 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:14.589 05:23:01 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b7a55eba-b4a9-45b1-b3eb-0a1190fde04b 00:32:14.589 05:23:01 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=b7a55eba-b4a9-45b1-b3eb-0a1190fde04b 00:32:14.589 05:23:01 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:14.589 05:23:01 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:14.589 05:23:01 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:32:14.589 05:23:01 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:14.589 05:23:01 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:14.589 05:23:01 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:32:14.589 05:23:01 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:14.589 05:23:01 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:14.589 05:23:01 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:14.589 05:23:01 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.589 05:23:01 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.589 05:23:01 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.589 05:23:01 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:32:14.589 05:23:01 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.589 05:23:01 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:32:14.589 05:23:01 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:14.589 05:23:01 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:14.589 05:23:01 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:14.589 05:23:01 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:14.589 05:23:01 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:32:14.589 05:23:01 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:14.589 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:14.589 05:23:01 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:14.589 05:23:01 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:14.589 05:23:01 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:14.589 05:23:01 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:32:14.589 05:23:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:32:14.589 05:23:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:32:14.589 05:23:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:32:14.589 05:23:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:32:14.589 05:23:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:32:14.589 05:23:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:32:14.589 05:23:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:32:14.589 05:23:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:32:14.589 05:23:01 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:32:14.589 05:23:01 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:32:14.589 INFO: launching applications... 
00:32:14.589 05:23:01 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:32:14.589 05:23:01 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:32:14.589 05:23:01 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:32:14.589 05:23:01 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:32:14.589 Waiting for target to run... 00:32:14.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:32:14.589 05:23:01 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:32:14.589 05:23:01 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:32:14.589 05:23:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:32:14.589 05:23:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:32:14.589 05:23:01 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57562 00:32:14.589 05:23:01 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:32:14.589 05:23:01 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57562 /var/tmp/spdk_tgt.sock 00:32:14.589 05:23:01 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57562 ']' 00:32:14.589 05:23:01 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:32:14.589 05:23:01 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:14.589 05:23:01 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:32:14.589 05:23:01 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:32:14.589 05:23:01 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:14.589 05:23:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:32:14.846 [2024-12-09 05:23:01.566825] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:32:14.846 [2024-12-09 05:23:01.567033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57562 ] 00:32:15.411 [2024-12-09 05:23:02.143304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.411 [2024-12-09 05:23:02.266624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:15.977 05:23:02 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:15.977 00:32:15.977 INFO: shutting down applications... 
00:32:15.977 05:23:02 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:32:15.977 05:23:02 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:32:15.977 05:23:02 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:32:15.977 05:23:02 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:32:15.977 05:23:02 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:32:15.977 05:23:02 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:32:15.977 05:23:02 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57562 ]] 00:32:15.977 05:23:02 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57562 00:32:15.977 05:23:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:32:15.977 05:23:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:32:15.977 05:23:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57562 00:32:15.977 05:23:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:32:16.544 05:23:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:32:16.544 05:23:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:32:16.544 05:23:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57562 00:32:16.544 05:23:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:32:17.108 05:23:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:32:17.108 05:23:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:32:17.108 05:23:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57562 00:32:17.108 05:23:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:32:17.673 05:23:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:32:17.673 05:23:04 
json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:32:17.673 05:23:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57562 00:32:17.673 05:23:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:32:18.239 05:23:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:32:18.239 05:23:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:32:18.239 05:23:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57562 00:32:18.239 05:23:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:32:18.497 05:23:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:32:18.497 05:23:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:32:18.497 05:23:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57562 00:32:18.497 05:23:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:32:19.062 05:23:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:32:19.062 05:23:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:32:19.062 05:23:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57562 00:32:19.062 05:23:05 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:32:19.062 05:23:05 json_config_extra_key -- json_config/common.sh@43 -- # break 00:32:19.062 SPDK target shutdown done 00:32:19.062 Success 00:32:19.062 05:23:05 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:32:19.062 05:23:05 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:32:19.062 05:23:05 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:32:19.062 ************************************ 00:32:19.062 END TEST json_config_extra_key 00:32:19.062 ************************************ 00:32:19.062 00:32:19.062 real 0m4.747s 00:32:19.062 user 0m4.258s 00:32:19.062 sys 
0m0.748s 00:32:19.062 05:23:05 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:19.062 05:23:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:32:19.062 05:23:06 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:32:19.062 05:23:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:19.062 05:23:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:19.062 05:23:06 -- common/autotest_common.sh@10 -- # set +x 00:32:19.062 ************************************ 00:32:19.062 START TEST alias_rpc 00:32:19.062 ************************************ 00:32:19.062 05:23:06 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:32:19.321 * Looking for test storage... 00:32:19.321 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:32:19.321 05:23:06 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:19.321 05:23:06 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:32:19.321 05:23:06 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:19.321 05:23:06 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:19.321 05:23:06 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:19.321 05:23:06 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:19.321 05:23:06 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:19.321 05:23:06 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:32:19.321 05:23:06 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:32:19.321 05:23:06 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:32:19.321 05:23:06 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:32:19.321 05:23:06 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:32:19.321 05:23:06 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:32:19.321 
05:23:06 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:32:19.321 05:23:06 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:19.321 05:23:06 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:32:19.321 05:23:06 alias_rpc -- scripts/common.sh@345 -- # : 1 00:32:19.321 05:23:06 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:19.321 05:23:06 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:19.321 05:23:06 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:32:19.321 05:23:06 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:32:19.321 05:23:06 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:19.321 05:23:06 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:32:19.321 05:23:06 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:32:19.321 05:23:06 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:32:19.321 05:23:06 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:32:19.321 05:23:06 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:19.321 05:23:06 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:32:19.321 05:23:06 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:32:19.321 05:23:06 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:19.321 05:23:06 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:19.321 05:23:06 alias_rpc -- scripts/common.sh@368 -- # return 0 00:32:19.321 05:23:06 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:19.321 05:23:06 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:19.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.321 --rc genhtml_branch_coverage=1 00:32:19.321 --rc genhtml_function_coverage=1 00:32:19.321 --rc genhtml_legend=1 00:32:19.321 --rc geninfo_all_blocks=1 00:32:19.321 --rc geninfo_unexecuted_blocks=1 00:32:19.321 00:32:19.321 ' 00:32:19.321 
05:23:06 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:19.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.321 --rc genhtml_branch_coverage=1 00:32:19.321 --rc genhtml_function_coverage=1 00:32:19.321 --rc genhtml_legend=1 00:32:19.321 --rc geninfo_all_blocks=1 00:32:19.321 --rc geninfo_unexecuted_blocks=1 00:32:19.321 00:32:19.321 ' 00:32:19.321 05:23:06 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:19.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.321 --rc genhtml_branch_coverage=1 00:32:19.321 --rc genhtml_function_coverage=1 00:32:19.321 --rc genhtml_legend=1 00:32:19.321 --rc geninfo_all_blocks=1 00:32:19.321 --rc geninfo_unexecuted_blocks=1 00:32:19.321 00:32:19.321 ' 00:32:19.321 05:23:06 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:19.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.321 --rc genhtml_branch_coverage=1 00:32:19.321 --rc genhtml_function_coverage=1 00:32:19.321 --rc genhtml_legend=1 00:32:19.321 --rc geninfo_all_blocks=1 00:32:19.321 --rc geninfo_unexecuted_blocks=1 00:32:19.321 00:32:19.321 ' 00:32:19.321 05:23:06 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:32:19.321 05:23:06 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57672 00:32:19.321 05:23:06 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57672 00:32:19.321 05:23:06 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57672 ']' 00:32:19.321 05:23:06 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:19.321 05:23:06 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:19.321 05:23:06 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:19.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:19.321 05:23:06 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:19.321 05:23:06 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:19.321 05:23:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:19.580 [2024-12-09 05:23:06.342738] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:32:19.580 [2024-12-09 05:23:06.342949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57672 ] 00:32:19.580 [2024-12-09 05:23:06.541405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.839 [2024-12-09 05:23:06.722481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:20.775 05:23:07 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:20.775 05:23:07 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:32:20.775 05:23:07 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:32:21.341 05:23:08 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57672 00:32:21.341 05:23:08 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57672 ']' 00:32:21.341 05:23:08 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57672 00:32:21.341 05:23:08 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:32:21.341 05:23:08 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:21.341 05:23:08 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57672 00:32:21.341 05:23:08 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:21.342 killing process with pid 57672 00:32:21.342 05:23:08 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:32:21.342 05:23:08 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57672' 00:32:21.342 05:23:08 alias_rpc -- common/autotest_common.sh@973 -- # kill 57672 00:32:21.342 05:23:08 alias_rpc -- common/autotest_common.sh@978 -- # wait 57672 00:32:23.872 00:32:23.872 real 0m4.452s 00:32:23.872 user 0m4.535s 00:32:23.872 sys 0m0.765s 00:32:23.872 ************************************ 00:32:23.872 END TEST alias_rpc 00:32:23.873 ************************************ 00:32:23.873 05:23:10 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:23.873 05:23:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:23.873 05:23:10 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:32:23.873 05:23:10 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:32:23.873 05:23:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:23.873 05:23:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:23.873 05:23:10 -- common/autotest_common.sh@10 -- # set +x 00:32:23.873 ************************************ 00:32:23.873 START TEST spdkcli_tcp 00:32:23.873 ************************************ 00:32:23.873 05:23:10 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:32:23.873 * Looking for test storage... 
00:32:23.873 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:32:23.873 05:23:10 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:23.873 05:23:10 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:23.873 05:23:10 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:32:23.873 05:23:10 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:23.873 05:23:10 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:23.873 05:23:10 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:23.873 05:23:10 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:23.873 05:23:10 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:23.873 05:23:10 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:23.873 05:23:10 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:23.873 05:23:10 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:23.873 05:23:10 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:23.873 05:23:10 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:23.873 05:23:10 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:32:23.873 05:23:10 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:23.873 05:23:10 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:23.873 05:23:10 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:32:23.873 05:23:10 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:23.873 05:23:10 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:23.873 05:23:10 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:23.873 05:23:10 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:32:23.873 05:23:10 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:23.873 05:23:10 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:32:23.873 05:23:10 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:23.873 05:23:10 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:23.873 05:23:10 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:32:23.873 05:23:10 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:23.873 05:23:10 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:32:23.873 05:23:10 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:23.873 05:23:10 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:23.873 05:23:10 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:23.873 05:23:10 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:32:23.873 05:23:10 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:23.873 05:23:10 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:23.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.873 --rc genhtml_branch_coverage=1 00:32:23.873 --rc genhtml_function_coverage=1 00:32:23.873 --rc genhtml_legend=1 00:32:23.873 --rc geninfo_all_blocks=1 00:32:23.873 --rc geninfo_unexecuted_blocks=1 00:32:23.873 00:32:23.873 ' 00:32:23.873 05:23:10 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:23.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.873 --rc genhtml_branch_coverage=1 00:32:23.873 --rc genhtml_function_coverage=1 00:32:23.873 --rc genhtml_legend=1 00:32:23.873 --rc geninfo_all_blocks=1 00:32:23.873 --rc geninfo_unexecuted_blocks=1 00:32:23.873 00:32:23.873 ' 00:32:23.873 05:23:10 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:23.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.873 --rc genhtml_branch_coverage=1 00:32:23.873 --rc genhtml_function_coverage=1 00:32:23.873 --rc genhtml_legend=1 00:32:23.873 --rc geninfo_all_blocks=1 00:32:23.873 --rc geninfo_unexecuted_blocks=1 00:32:23.873 00:32:23.873 ' 00:32:23.873 05:23:10 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:23.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.873 --rc genhtml_branch_coverage=1 00:32:23.873 --rc genhtml_function_coverage=1 00:32:23.873 --rc genhtml_legend=1 00:32:23.873 --rc geninfo_all_blocks=1 00:32:23.873 --rc geninfo_unexecuted_blocks=1 00:32:23.873 00:32:23.873 ' 00:32:23.873 05:23:10 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:32:23.873 05:23:10 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:32:23.873 05:23:10 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:32:23.873 05:23:10 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:32:23.873 05:23:10 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:32:23.873 05:23:10 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:23.873 05:23:10 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:32:23.873 05:23:10 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:23.873 05:23:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:23.873 05:23:10 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57786 00:32:23.873 05:23:10 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57786 00:32:23.873 05:23:10 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:32:23.873 05:23:10 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57786 ']' 00:32:23.873 05:23:10 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:23.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:23.873 05:23:10 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:23.873 05:23:10 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:23.873 05:23:10 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:23.873 05:23:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:24.130 [2024-12-09 05:23:10.848437] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:32:24.130 [2024-12-09 05:23:10.849162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57786 ] 00:32:24.130 [2024-12-09 05:23:11.040890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:24.387 [2024-12-09 05:23:11.185280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:24.387 [2024-12-09 05:23:11.185286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:25.440 05:23:12 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:25.440 05:23:12 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:32:25.440 05:23:12 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57803 00:32:25.440 05:23:12 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:32:25.440 05:23:12 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:32:25.440 [ 00:32:25.440 "bdev_malloc_delete", 
00:32:25.440 "bdev_malloc_create", 00:32:25.440 "bdev_null_resize", 00:32:25.440 "bdev_null_delete", 00:32:25.440 "bdev_null_create", 00:32:25.440 "bdev_nvme_cuse_unregister", 00:32:25.440 "bdev_nvme_cuse_register", 00:32:25.440 "bdev_opal_new_user", 00:32:25.440 "bdev_opal_set_lock_state", 00:32:25.440 "bdev_opal_delete", 00:32:25.440 "bdev_opal_get_info", 00:32:25.440 "bdev_opal_create", 00:32:25.440 "bdev_nvme_opal_revert", 00:32:25.440 "bdev_nvme_opal_init", 00:32:25.440 "bdev_nvme_send_cmd", 00:32:25.440 "bdev_nvme_set_keys", 00:32:25.440 "bdev_nvme_get_path_iostat", 00:32:25.440 "bdev_nvme_get_mdns_discovery_info", 00:32:25.440 "bdev_nvme_stop_mdns_discovery", 00:32:25.440 "bdev_nvme_start_mdns_discovery", 00:32:25.440 "bdev_nvme_set_multipath_policy", 00:32:25.440 "bdev_nvme_set_preferred_path", 00:32:25.440 "bdev_nvme_get_io_paths", 00:32:25.440 "bdev_nvme_remove_error_injection", 00:32:25.440 "bdev_nvme_add_error_injection", 00:32:25.440 "bdev_nvme_get_discovery_info", 00:32:25.440 "bdev_nvme_stop_discovery", 00:32:25.440 "bdev_nvme_start_discovery", 00:32:25.440 "bdev_nvme_get_controller_health_info", 00:32:25.440 "bdev_nvme_disable_controller", 00:32:25.440 "bdev_nvme_enable_controller", 00:32:25.440 "bdev_nvme_reset_controller", 00:32:25.440 "bdev_nvme_get_transport_statistics", 00:32:25.440 "bdev_nvme_apply_firmware", 00:32:25.440 "bdev_nvme_detach_controller", 00:32:25.440 "bdev_nvme_get_controllers", 00:32:25.440 "bdev_nvme_attach_controller", 00:32:25.440 "bdev_nvme_set_hotplug", 00:32:25.440 "bdev_nvme_set_options", 00:32:25.440 "bdev_passthru_delete", 00:32:25.440 "bdev_passthru_create", 00:32:25.440 "bdev_lvol_set_parent_bdev", 00:32:25.440 "bdev_lvol_set_parent", 00:32:25.440 "bdev_lvol_check_shallow_copy", 00:32:25.440 "bdev_lvol_start_shallow_copy", 00:32:25.440 "bdev_lvol_grow_lvstore", 00:32:25.440 "bdev_lvol_get_lvols", 00:32:25.440 "bdev_lvol_get_lvstores", 00:32:25.440 "bdev_lvol_delete", 00:32:25.440 "bdev_lvol_set_read_only", 
00:32:25.440 "bdev_lvol_resize", 00:32:25.440 "bdev_lvol_decouple_parent", 00:32:25.440 "bdev_lvol_inflate", 00:32:25.440 "bdev_lvol_rename", 00:32:25.440 "bdev_lvol_clone_bdev", 00:32:25.440 "bdev_lvol_clone", 00:32:25.440 "bdev_lvol_snapshot", 00:32:25.440 "bdev_lvol_create", 00:32:25.440 "bdev_lvol_delete_lvstore", 00:32:25.440 "bdev_lvol_rename_lvstore", 00:32:25.440 "bdev_lvol_create_lvstore", 00:32:25.440 "bdev_raid_set_options", 00:32:25.440 "bdev_raid_remove_base_bdev", 00:32:25.440 "bdev_raid_add_base_bdev", 00:32:25.440 "bdev_raid_delete", 00:32:25.440 "bdev_raid_create", 00:32:25.440 "bdev_raid_get_bdevs", 00:32:25.440 "bdev_error_inject_error", 00:32:25.440 "bdev_error_delete", 00:32:25.440 "bdev_error_create", 00:32:25.440 "bdev_split_delete", 00:32:25.440 "bdev_split_create", 00:32:25.440 "bdev_delay_delete", 00:32:25.440 "bdev_delay_create", 00:32:25.440 "bdev_delay_update_latency", 00:32:25.440 "bdev_zone_block_delete", 00:32:25.440 "bdev_zone_block_create", 00:32:25.440 "blobfs_create", 00:32:25.440 "blobfs_detect", 00:32:25.440 "blobfs_set_cache_size", 00:32:25.440 "bdev_aio_delete", 00:32:25.440 "bdev_aio_rescan", 00:32:25.440 "bdev_aio_create", 00:32:25.440 "bdev_ftl_set_property", 00:32:25.440 "bdev_ftl_get_properties", 00:32:25.440 "bdev_ftl_get_stats", 00:32:25.440 "bdev_ftl_unmap", 00:32:25.440 "bdev_ftl_unload", 00:32:25.440 "bdev_ftl_delete", 00:32:25.440 "bdev_ftl_load", 00:32:25.440 "bdev_ftl_create", 00:32:25.440 "bdev_virtio_attach_controller", 00:32:25.440 "bdev_virtio_scsi_get_devices", 00:32:25.440 "bdev_virtio_detach_controller", 00:32:25.440 "bdev_virtio_blk_set_hotplug", 00:32:25.440 "bdev_iscsi_delete", 00:32:25.440 "bdev_iscsi_create", 00:32:25.440 "bdev_iscsi_set_options", 00:32:25.440 "accel_error_inject_error", 00:32:25.440 "ioat_scan_accel_module", 00:32:25.440 "dsa_scan_accel_module", 00:32:25.440 "iaa_scan_accel_module", 00:32:25.440 "keyring_file_remove_key", 00:32:25.440 "keyring_file_add_key", 00:32:25.440 
"keyring_linux_set_options", 00:32:25.440 "fsdev_aio_delete", 00:32:25.440 "fsdev_aio_create", 00:32:25.440 "iscsi_get_histogram", 00:32:25.440 "iscsi_enable_histogram", 00:32:25.441 "iscsi_set_options", 00:32:25.441 "iscsi_get_auth_groups", 00:32:25.441 "iscsi_auth_group_remove_secret", 00:32:25.441 "iscsi_auth_group_add_secret", 00:32:25.441 "iscsi_delete_auth_group", 00:32:25.441 "iscsi_create_auth_group", 00:32:25.441 "iscsi_set_discovery_auth", 00:32:25.441 "iscsi_get_options", 00:32:25.441 "iscsi_target_node_request_logout", 00:32:25.441 "iscsi_target_node_set_redirect", 00:32:25.441 "iscsi_target_node_set_auth", 00:32:25.441 "iscsi_target_node_add_lun", 00:32:25.441 "iscsi_get_stats", 00:32:25.441 "iscsi_get_connections", 00:32:25.441 "iscsi_portal_group_set_auth", 00:32:25.441 "iscsi_start_portal_group", 00:32:25.441 "iscsi_delete_portal_group", 00:32:25.441 "iscsi_create_portal_group", 00:32:25.441 "iscsi_get_portal_groups", 00:32:25.441 "iscsi_delete_target_node", 00:32:25.441 "iscsi_target_node_remove_pg_ig_maps", 00:32:25.441 "iscsi_target_node_add_pg_ig_maps", 00:32:25.441 "iscsi_create_target_node", 00:32:25.441 "iscsi_get_target_nodes", 00:32:25.441 "iscsi_delete_initiator_group", 00:32:25.441 "iscsi_initiator_group_remove_initiators", 00:32:25.441 "iscsi_initiator_group_add_initiators", 00:32:25.441 "iscsi_create_initiator_group", 00:32:25.441 "iscsi_get_initiator_groups", 00:32:25.441 "nvmf_set_crdt", 00:32:25.441 "nvmf_set_config", 00:32:25.441 "nvmf_set_max_subsystems", 00:32:25.441 "nvmf_stop_mdns_prr", 00:32:25.441 "nvmf_publish_mdns_prr", 00:32:25.441 "nvmf_subsystem_get_listeners", 00:32:25.441 "nvmf_subsystem_get_qpairs", 00:32:25.441 "nvmf_subsystem_get_controllers", 00:32:25.441 "nvmf_get_stats", 00:32:25.441 "nvmf_get_transports", 00:32:25.441 "nvmf_create_transport", 00:32:25.441 "nvmf_get_targets", 00:32:25.441 "nvmf_delete_target", 00:32:25.441 "nvmf_create_target", 00:32:25.441 "nvmf_subsystem_allow_any_host", 00:32:25.441 
"nvmf_subsystem_set_keys", 00:32:25.441 "nvmf_subsystem_remove_host", 00:32:25.441 "nvmf_subsystem_add_host", 00:32:25.441 "nvmf_ns_remove_host", 00:32:25.441 "nvmf_ns_add_host", 00:32:25.441 "nvmf_subsystem_remove_ns", 00:32:25.441 "nvmf_subsystem_set_ns_ana_group", 00:32:25.441 "nvmf_subsystem_add_ns", 00:32:25.441 "nvmf_subsystem_listener_set_ana_state", 00:32:25.441 "nvmf_discovery_get_referrals", 00:32:25.441 "nvmf_discovery_remove_referral", 00:32:25.441 "nvmf_discovery_add_referral", 00:32:25.441 "nvmf_subsystem_remove_listener", 00:32:25.441 "nvmf_subsystem_add_listener", 00:32:25.441 "nvmf_delete_subsystem", 00:32:25.441 "nvmf_create_subsystem", 00:32:25.441 "nvmf_get_subsystems", 00:32:25.441 "env_dpdk_get_mem_stats", 00:32:25.441 "nbd_get_disks", 00:32:25.441 "nbd_stop_disk", 00:32:25.441 "nbd_start_disk", 00:32:25.441 "ublk_recover_disk", 00:32:25.441 "ublk_get_disks", 00:32:25.441 "ublk_stop_disk", 00:32:25.441 "ublk_start_disk", 00:32:25.441 "ublk_destroy_target", 00:32:25.441 "ublk_create_target", 00:32:25.441 "virtio_blk_create_transport", 00:32:25.441 "virtio_blk_get_transports", 00:32:25.441 "vhost_controller_set_coalescing", 00:32:25.441 "vhost_get_controllers", 00:32:25.441 "vhost_delete_controller", 00:32:25.441 "vhost_create_blk_controller", 00:32:25.441 "vhost_scsi_controller_remove_target", 00:32:25.441 "vhost_scsi_controller_add_target", 00:32:25.441 "vhost_start_scsi_controller", 00:32:25.441 "vhost_create_scsi_controller", 00:32:25.441 "thread_set_cpumask", 00:32:25.441 "scheduler_set_options", 00:32:25.441 "framework_get_governor", 00:32:25.441 "framework_get_scheduler", 00:32:25.441 "framework_set_scheduler", 00:32:25.441 "framework_get_reactors", 00:32:25.441 "thread_get_io_channels", 00:32:25.441 "thread_get_pollers", 00:32:25.441 "thread_get_stats", 00:32:25.441 "framework_monitor_context_switch", 00:32:25.441 "spdk_kill_instance", 00:32:25.441 "log_enable_timestamps", 00:32:25.441 "log_get_flags", 00:32:25.441 "log_clear_flag", 
00:32:25.441 "log_set_flag", 00:32:25.441 "log_get_level", 00:32:25.441 "log_set_level", 00:32:25.441 "log_get_print_level", 00:32:25.441 "log_set_print_level", 00:32:25.441 "framework_enable_cpumask_locks", 00:32:25.441 "framework_disable_cpumask_locks", 00:32:25.441 "framework_wait_init", 00:32:25.441 "framework_start_init", 00:32:25.441 "scsi_get_devices", 00:32:25.441 "bdev_get_histogram", 00:32:25.441 "bdev_enable_histogram", 00:32:25.441 "bdev_set_qos_limit", 00:32:25.441 "bdev_set_qd_sampling_period", 00:32:25.441 "bdev_get_bdevs", 00:32:25.441 "bdev_reset_iostat", 00:32:25.441 "bdev_get_iostat", 00:32:25.441 "bdev_examine", 00:32:25.441 "bdev_wait_for_examine", 00:32:25.441 "bdev_set_options", 00:32:25.441 "accel_get_stats", 00:32:25.441 "accel_set_options", 00:32:25.441 "accel_set_driver", 00:32:25.441 "accel_crypto_key_destroy", 00:32:25.441 "accel_crypto_keys_get", 00:32:25.441 "accel_crypto_key_create", 00:32:25.441 "accel_assign_opc", 00:32:25.441 "accel_get_module_info", 00:32:25.441 "accel_get_opc_assignments", 00:32:25.441 "vmd_rescan", 00:32:25.441 "vmd_remove_device", 00:32:25.441 "vmd_enable", 00:32:25.441 "sock_get_default_impl", 00:32:25.441 "sock_set_default_impl", 00:32:25.441 "sock_impl_set_options", 00:32:25.441 "sock_impl_get_options", 00:32:25.441 "iobuf_get_stats", 00:32:25.441 "iobuf_set_options", 00:32:25.441 "keyring_get_keys", 00:32:25.441 "framework_get_pci_devices", 00:32:25.441 "framework_get_config", 00:32:25.441 "framework_get_subsystems", 00:32:25.441 "fsdev_set_opts", 00:32:25.441 "fsdev_get_opts", 00:32:25.441 "trace_get_info", 00:32:25.441 "trace_get_tpoint_group_mask", 00:32:25.441 "trace_disable_tpoint_group", 00:32:25.441 "trace_enable_tpoint_group", 00:32:25.441 "trace_clear_tpoint_mask", 00:32:25.441 "trace_set_tpoint_mask", 00:32:25.441 "notify_get_notifications", 00:32:25.441 "notify_get_types", 00:32:25.441 "spdk_get_version", 00:32:25.441 "rpc_get_methods" 00:32:25.441 ] 00:32:25.698 05:23:12 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:32:25.698 05:23:12 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:25.698 05:23:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:25.698 05:23:12 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:32:25.698 05:23:12 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57786 00:32:25.698 05:23:12 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57786 ']' 00:32:25.698 05:23:12 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57786 00:32:25.698 05:23:12 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:32:25.698 05:23:12 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:25.698 05:23:12 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57786 00:32:25.698 killing process with pid 57786 00:32:25.698 05:23:12 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:25.698 05:23:12 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:25.699 05:23:12 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57786' 00:32:25.699 05:23:12 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57786 00:32:25.699 05:23:12 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57786 00:32:28.229 00:32:28.229 real 0m4.358s 00:32:28.229 user 0m7.681s 00:32:28.229 sys 0m0.817s 00:32:28.229 ************************************ 00:32:28.229 END TEST spdkcli_tcp 00:32:28.229 ************************************ 00:32:28.229 05:23:14 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:28.229 05:23:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:28.229 05:23:14 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:32:28.229 05:23:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:28.229 05:23:14 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:32:28.229 05:23:14 -- common/autotest_common.sh@10 -- # set +x 00:32:28.229 ************************************ 00:32:28.229 START TEST dpdk_mem_utility 00:32:28.229 ************************************ 00:32:28.229 05:23:14 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:32:28.229 * Looking for test storage... 00:32:28.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:32:28.229 05:23:15 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:28.229 05:23:15 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:32:28.229 05:23:15 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:28.229 05:23:15 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:28.229 05:23:15 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:28.229 05:23:15 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:28.229 05:23:15 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:28.229 05:23:15 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:32:28.229 05:23:15 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:32:28.229 05:23:15 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:32:28.229 05:23:15 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:32:28.229 05:23:15 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:32:28.229 05:23:15 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:32:28.229 05:23:15 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:32:28.229 05:23:15 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:28.229 05:23:15 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:32:28.229 05:23:15 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:32:28.229 
05:23:15 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:28.229 05:23:15 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:28.229 05:23:15 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:32:28.229 05:23:15 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:32:28.229 05:23:15 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:28.229 05:23:15 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:32:28.229 05:23:15 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:32:28.229 05:23:15 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:32:28.229 05:23:15 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:32:28.229 05:23:15 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:28.229 05:23:15 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:32:28.229 05:23:15 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:32:28.229 05:23:15 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:28.229 05:23:15 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:28.229 05:23:15 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:32:28.229 05:23:15 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:28.229 05:23:15 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:28.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.229 --rc genhtml_branch_coverage=1 00:32:28.229 --rc genhtml_function_coverage=1 00:32:28.229 --rc genhtml_legend=1 00:32:28.229 --rc geninfo_all_blocks=1 00:32:28.229 --rc geninfo_unexecuted_blocks=1 00:32:28.229 00:32:28.229 ' 00:32:28.229 05:23:15 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:28.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.229 --rc 
genhtml_branch_coverage=1 00:32:28.229 --rc genhtml_function_coverage=1 00:32:28.229 --rc genhtml_legend=1 00:32:28.229 --rc geninfo_all_blocks=1 00:32:28.229 --rc geninfo_unexecuted_blocks=1 00:32:28.229 00:32:28.229 ' 00:32:28.229 05:23:15 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:28.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.229 --rc genhtml_branch_coverage=1 00:32:28.229 --rc genhtml_function_coverage=1 00:32:28.229 --rc genhtml_legend=1 00:32:28.229 --rc geninfo_all_blocks=1 00:32:28.229 --rc geninfo_unexecuted_blocks=1 00:32:28.229 00:32:28.229 ' 00:32:28.229 05:23:15 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:28.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.229 --rc genhtml_branch_coverage=1 00:32:28.229 --rc genhtml_function_coverage=1 00:32:28.230 --rc genhtml_legend=1 00:32:28.230 --rc geninfo_all_blocks=1 00:32:28.230 --rc geninfo_unexecuted_blocks=1 00:32:28.230 00:32:28.230 ' 00:32:28.230 05:23:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:32:28.230 05:23:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:28.230 05:23:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57908 00:32:28.230 05:23:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57908 00:32:28.230 05:23:15 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57908 ']' 00:32:28.230 05:23:15 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:28.230 05:23:15 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:28.230 05:23:15 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:32:28.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:28.230 05:23:15 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:28.230 05:23:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:32:28.488 [2024-12-09 05:23:15.278206] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:32:28.488 [2024-12-09 05:23:15.278682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57908 ] 00:32:28.745 [2024-12-09 05:23:15.486318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:28.745 [2024-12-09 05:23:15.673093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:29.679 05:23:16 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:29.679 05:23:16 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:32:29.679 05:23:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:32:29.679 05:23:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:32:29.679 05:23:16 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.679 05:23:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:32:29.939 { 00:32:29.939 "filename": "/tmp/spdk_mem_dump.txt" 00:32:29.939 } 00:32:29.939 05:23:16 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.939 05:23:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:32:29.939 DPDK memory size 824.000000 MiB in 1 heap(s) 00:32:29.939 1 heaps totaling size 824.000000 MiB 00:32:29.939 size: 
824.000000 MiB heap id: 0 00:32:29.939 end heaps---------- 00:32:29.939 9 mempools totaling size 603.782043 MiB 00:32:29.939 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:32:29.939 size: 158.602051 MiB name: PDU_data_out_Pool 00:32:29.939 size: 100.555481 MiB name: bdev_io_57908 00:32:29.939 size: 50.003479 MiB name: msgpool_57908 00:32:29.939 size: 36.509338 MiB name: fsdev_io_57908 00:32:29.939 size: 21.763794 MiB name: PDU_Pool 00:32:29.939 size: 19.513306 MiB name: SCSI_TASK_Pool 00:32:29.939 size: 4.133484 MiB name: evtpool_57908 00:32:29.939 size: 0.026123 MiB name: Session_Pool 00:32:29.939 end mempools------- 00:32:29.939 6 memzones totaling size 4.142822 MiB 00:32:29.939 size: 1.000366 MiB name: RG_ring_0_57908 00:32:29.939 size: 1.000366 MiB name: RG_ring_1_57908 00:32:29.939 size: 1.000366 MiB name: RG_ring_4_57908 00:32:29.939 size: 1.000366 MiB name: RG_ring_5_57908 00:32:29.939 size: 0.125366 MiB name: RG_ring_2_57908 00:32:29.939 size: 0.015991 MiB name: RG_ring_3_57908 00:32:29.939 end memzones------- 00:32:29.939 05:23:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:32:29.939 heap id: 0 total size: 824.000000 MiB number of busy elements: 318 number of free elements: 18 00:32:29.939 list of free elements. 
size: 16.780640 MiB 00:32:29.940 element at address: 0x200006400000 with size: 1.995972 MiB 00:32:29.940 element at address: 0x20000a600000 with size: 1.995972 MiB 00:32:29.940 element at address: 0x200003e00000 with size: 1.991028 MiB 00:32:29.940 element at address: 0x200019500040 with size: 0.999939 MiB 00:32:29.940 element at address: 0x200019900040 with size: 0.999939 MiB 00:32:29.940 element at address: 0x200019a00000 with size: 0.999084 MiB 00:32:29.940 element at address: 0x200032600000 with size: 0.994324 MiB 00:32:29.940 element at address: 0x200000400000 with size: 0.992004 MiB 00:32:29.940 element at address: 0x200019200000 with size: 0.959656 MiB 00:32:29.940 element at address: 0x200019d00040 with size: 0.936401 MiB 00:32:29.940 element at address: 0x200000200000 with size: 0.716980 MiB 00:32:29.940 element at address: 0x20001b400000 with size: 0.561951 MiB 00:32:29.940 element at address: 0x200000c00000 with size: 0.489197 MiB 00:32:29.940 element at address: 0x200019600000 with size: 0.487976 MiB 00:32:29.940 element at address: 0x200019e00000 with size: 0.485413 MiB 00:32:29.940 element at address: 0x200012c00000 with size: 0.433472 MiB 00:32:29.940 element at address: 0x200028800000 with size: 0.390442 MiB 00:32:29.940 element at address: 0x200000800000 with size: 0.350891 MiB 00:32:29.940 list of standard malloc elements. 
size: 199.288452 MiB 00:32:29.940 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:32:29.940 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:32:29.940 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:32:29.940 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:32:29.940 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:32:29.940 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:32:29.940 element at address: 0x200019deff40 with size: 0.062683 MiB 00:32:29.940 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:32:29.940 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:32:29.940 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:32:29.940 element at address: 0x200012bff040 with size: 0.000305 MiB 00:32:29.940 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:32:29.940 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:32:29.940 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:32:29.940 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:32:29.940 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:32:29.940 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:32:29.940 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:32:29.940 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:32:29.940 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:32:29.940 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:32:29.940 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:32:29.940 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:32:29.940 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:32:29.940 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:32:29.940 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:32:29.940 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:32:29.940 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:32:29.940 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:32:29.940 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:32:29.940 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:32:29.940 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:32:29.940 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:32:29.940 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:32:29.940 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:32:29.940 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:32:29.940 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:32:29.940 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:32:29.940 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:32:29.940 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:32:29.940 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:32:29.940 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:32:29.940 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:32:29.940 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200000cff000 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:32:29.940 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200012bff180 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200012bff280 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200012bff380 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200012bff480 with size: 0.000244 MiB 00:32:29.940 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200012bff680 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200012bff780 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200012bff880 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200012bff980 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:32:29.940 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:32:29.941 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:32:29.941 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:32:29.941 
element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:32:29.941 element at address: 0x200019affc40 with size: 0.000244 MiB 00:32:29.941 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b490ec0 with size: 0.000244 
MiB 00:32:29.941 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b492ac0 
with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:32:29.941 element at 
address: 0x20001b4946c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:32:29.941 element at address: 0x200028863f40 with size: 0.000244 MiB 00:32:29.941 element at address: 0x200028864040 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886af80 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886b080 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886b180 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886b280 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886b380 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886b480 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886b580 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886b680 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886b780 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886b880 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886b980 with size: 0.000244 MiB 
00:32:29.941 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886be80 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886c080 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886c180 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886c280 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886c380 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886c480 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886c580 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886c680 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886c780 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886c880 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886c980 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886d080 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886d180 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886d280 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886d380 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886d480 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886d580 with 
size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886d680 with size: 0.000244 MiB 00:32:29.941 element at address: 0x20002886d780 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886d880 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886d980 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886da80 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886db80 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886de80 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886df80 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886e080 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886e180 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886e280 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886e380 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886e480 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886e580 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886e680 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886e780 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886e880 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886e980 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886f080 with size: 0.000244 MiB 00:32:29.942 element at address: 
0x20002886f180 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886f280 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886f380 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886f480 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886f580 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886f680 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886f780 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886f880 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886f980 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:32:29.942 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:32:29.942 list of memzone associated elements. 
size: 607.930908 MiB 00:32:29.942 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:32:29.942 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:32:29.942 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:32:29.942 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:32:29.942 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:32:29.942 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57908_0 00:32:29.942 element at address: 0x200000dff340 with size: 48.003113 MiB 00:32:29.942 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57908_0 00:32:29.942 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:32:29.942 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57908_0 00:32:29.942 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:32:29.942 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:32:29.942 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:32:29.942 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:32:29.942 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:32:29.942 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57908_0 00:32:29.942 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:32:29.942 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57908 00:32:29.942 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:32:29.942 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57908 00:32:29.942 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:32:29.942 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:32:29.942 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:32:29.942 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:32:29.942 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:32:29.942 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:32:29.942 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:32:29.942 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:32:29.942 element at address: 0x200000cff100 with size: 1.000549 MiB 00:32:29.942 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57908 00:32:29.942 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:32:29.942 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57908 00:32:29.942 element at address: 0x200019affd40 with size: 1.000549 MiB 00:32:29.942 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57908 00:32:29.942 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:32:29.942 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57908 00:32:29.942 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:32:29.942 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57908 00:32:29.942 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:32:29.942 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57908 00:32:29.942 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:32:29.942 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:32:29.942 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:32:29.942 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:32:29.942 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:32:29.942 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:32:29.942 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:32:29.942 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57908 00:32:29.942 element at address: 0x20000085df80 with size: 0.125549 MiB 00:32:29.942 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57908 00:32:29.942 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:32:29.942 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:32:29.942 element at address: 0x200028864140 with size: 0.023804 MiB 00:32:29.942 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:32:29.942 element at address: 0x200000859d40 with size: 0.016174 MiB 00:32:29.942 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57908 00:32:29.942 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:32:29.942 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:32:29.942 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:32:29.942 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57908 00:32:29.942 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:32:29.942 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57908 00:32:29.942 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:32:29.942 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57908 00:32:29.942 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:32:29.942 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:32:29.942 05:23:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:32:29.942 05:23:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57908 00:32:29.942 05:23:16 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57908 ']' 00:32:29.942 05:23:16 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57908 00:32:29.942 05:23:16 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:32:29.942 05:23:16 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:29.942 05:23:16 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57908 00:32:29.942 killing process with pid 57908 00:32:29.942 05:23:16 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:32:29.942 05:23:16 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:29.942 05:23:16 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57908' 00:32:29.942 05:23:16 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57908 00:32:29.942 05:23:16 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57908 00:32:32.471 ************************************ 00:32:32.471 END TEST dpdk_mem_utility 00:32:32.471 ************************************ 00:32:32.471 00:32:32.471 real 0m4.255s 00:32:32.471 user 0m4.170s 00:32:32.471 sys 0m0.787s 00:32:32.471 05:23:19 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:32.471 05:23:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:32:32.471 05:23:19 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:32:32.471 05:23:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:32.471 05:23:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:32.471 05:23:19 -- common/autotest_common.sh@10 -- # set +x 00:32:32.471 ************************************ 00:32:32.471 START TEST event 00:32:32.471 ************************************ 00:32:32.471 05:23:19 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:32:32.471 * Looking for test storage... 
00:32:32.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:32:32.471 05:23:19 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:32.471 05:23:19 event -- common/autotest_common.sh@1693 -- # lcov --version 00:32:32.471 05:23:19 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:32.471 05:23:19 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:32.471 05:23:19 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:32.471 05:23:19 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:32.471 05:23:19 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:32.472 05:23:19 event -- scripts/common.sh@336 -- # IFS=.-: 00:32:32.472 05:23:19 event -- scripts/common.sh@336 -- # read -ra ver1 00:32:32.472 05:23:19 event -- scripts/common.sh@337 -- # IFS=.-: 00:32:32.472 05:23:19 event -- scripts/common.sh@337 -- # read -ra ver2 00:32:32.472 05:23:19 event -- scripts/common.sh@338 -- # local 'op=<' 00:32:32.472 05:23:19 event -- scripts/common.sh@340 -- # ver1_l=2 00:32:32.472 05:23:19 event -- scripts/common.sh@341 -- # ver2_l=1 00:32:32.472 05:23:19 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:32.472 05:23:19 event -- scripts/common.sh@344 -- # case "$op" in 00:32:32.472 05:23:19 event -- scripts/common.sh@345 -- # : 1 00:32:32.472 05:23:19 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:32.472 05:23:19 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:32.472 05:23:19 event -- scripts/common.sh@365 -- # decimal 1 00:32:32.472 05:23:19 event -- scripts/common.sh@353 -- # local d=1 00:32:32.472 05:23:19 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:32.472 05:23:19 event -- scripts/common.sh@355 -- # echo 1 00:32:32.472 05:23:19 event -- scripts/common.sh@365 -- # ver1[v]=1 00:32:32.472 05:23:19 event -- scripts/common.sh@366 -- # decimal 2 00:32:32.472 05:23:19 event -- scripts/common.sh@353 -- # local d=2 00:32:32.472 05:23:19 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:32.472 05:23:19 event -- scripts/common.sh@355 -- # echo 2 00:32:32.472 05:23:19 event -- scripts/common.sh@366 -- # ver2[v]=2 00:32:32.472 05:23:19 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:32.472 05:23:19 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:32.472 05:23:19 event -- scripts/common.sh@368 -- # return 0 00:32:32.472 05:23:19 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:32.472 05:23:19 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:32.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.472 --rc genhtml_branch_coverage=1 00:32:32.472 --rc genhtml_function_coverage=1 00:32:32.472 --rc genhtml_legend=1 00:32:32.472 --rc geninfo_all_blocks=1 00:32:32.472 --rc geninfo_unexecuted_blocks=1 00:32:32.472 00:32:32.472 ' 00:32:32.472 05:23:19 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:32.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.472 --rc genhtml_branch_coverage=1 00:32:32.472 --rc genhtml_function_coverage=1 00:32:32.472 --rc genhtml_legend=1 00:32:32.472 --rc geninfo_all_blocks=1 00:32:32.472 --rc geninfo_unexecuted_blocks=1 00:32:32.472 00:32:32.472 ' 00:32:32.472 05:23:19 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:32.472 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:32:32.472 --rc genhtml_branch_coverage=1 00:32:32.472 --rc genhtml_function_coverage=1 00:32:32.472 --rc genhtml_legend=1 00:32:32.472 --rc geninfo_all_blocks=1 00:32:32.472 --rc geninfo_unexecuted_blocks=1 00:32:32.472 00:32:32.472 ' 00:32:32.472 05:23:19 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:32.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.472 --rc genhtml_branch_coverage=1 00:32:32.472 --rc genhtml_function_coverage=1 00:32:32.472 --rc genhtml_legend=1 00:32:32.472 --rc geninfo_all_blocks=1 00:32:32.472 --rc geninfo_unexecuted_blocks=1 00:32:32.472 00:32:32.472 ' 00:32:32.472 05:23:19 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:32:32.472 05:23:19 event -- bdev/nbd_common.sh@6 -- # set -e 00:32:32.472 05:23:19 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:32:32.472 05:23:19 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:32:32.472 05:23:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:32.472 05:23:19 event -- common/autotest_common.sh@10 -- # set +x 00:32:32.472 ************************************ 00:32:32.472 START TEST event_perf 00:32:32.472 ************************************ 00:32:32.472 05:23:19 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:32:32.730 Running I/O for 1 seconds...[2024-12-09 05:23:19.481732] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:32:32.730 [2024-12-09 05:23:19.482275] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58016 ] 00:32:32.730 [2024-12-09 05:23:19.671697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:32.987 [2024-12-09 05:23:19.817667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:32.987 [2024-12-09 05:23:19.817853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:32.987 [2024-12-09 05:23:19.818422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:32.987 [2024-12-09 05:23:19.818440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:34.361 Running I/O for 1 seconds... 00:32:34.361 lcore 0: 201051 00:32:34.361 lcore 1: 201050 00:32:34.361 lcore 2: 201050 00:32:34.361 lcore 3: 201052 00:32:34.361 done. 
00:32:34.361 00:32:34.361 real 0m1.715s 00:32:34.361 user 0m4.455s 00:32:34.361 sys 0m0.134s 00:32:34.361 05:23:21 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:34.361 05:23:21 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:32:34.361 ************************************ 00:32:34.361 END TEST event_perf 00:32:34.361 ************************************ 00:32:34.361 05:23:21 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:32:34.361 05:23:21 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:34.361 05:23:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:34.361 05:23:21 event -- common/autotest_common.sh@10 -- # set +x 00:32:34.361 ************************************ 00:32:34.361 START TEST event_reactor 00:32:34.361 ************************************ 00:32:34.361 05:23:21 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:32:34.361 [2024-12-09 05:23:21.261198] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:32:34.361 [2024-12-09 05:23:21.261526] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58061 ] 00:32:34.620 [2024-12-09 05:23:21.449522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.620 [2024-12-09 05:23:21.585728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:35.994 test_start 00:32:35.994 oneshot 00:32:35.994 tick 100 00:32:35.994 tick 100 00:32:35.994 tick 250 00:32:35.994 tick 100 00:32:35.994 tick 100 00:32:35.994 tick 250 00:32:35.994 tick 500 00:32:35.994 tick 100 00:32:35.994 tick 100 00:32:35.994 tick 100 00:32:35.994 tick 250 00:32:35.994 tick 100 00:32:35.994 tick 100 00:32:35.994 test_end 00:32:35.994 00:32:35.994 real 0m1.687s 00:32:35.994 user 0m1.451s 00:32:35.994 sys 0m0.125s 00:32:35.994 05:23:22 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:35.994 ************************************ 00:32:35.994 END TEST event_reactor 00:32:35.995 ************************************ 00:32:35.995 05:23:22 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:32:35.995 05:23:22 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:32:35.995 05:23:22 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:35.995 05:23:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:35.995 05:23:22 event -- common/autotest_common.sh@10 -- # set +x 00:32:35.995 ************************************ 00:32:35.995 START TEST event_reactor_perf 00:32:35.995 ************************************ 00:32:35.995 05:23:22 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:32:36.254 [2024-12-09 
05:23:22.999462] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:32:36.254 [2024-12-09 05:23:22.999634] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58103 ] 00:32:36.254 [2024-12-09 05:23:23.191124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:36.521 [2024-12-09 05:23:23.339264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:37.910 test_start 00:32:37.910 test_end 00:32:37.910 Performance: 291032 events per second 00:32:37.910 ************************************ 00:32:37.910 END TEST event_reactor_perf 00:32:37.910 ************************************ 00:32:37.910 00:32:37.910 real 0m1.725s 00:32:37.910 user 0m1.497s 00:32:37.910 sys 0m0.118s 00:32:37.910 05:23:24 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:37.910 05:23:24 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:32:37.910 05:23:24 event -- event/event.sh@49 -- # uname -s 00:32:37.910 05:23:24 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:32:37.910 05:23:24 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:32:37.910 05:23:24 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:37.910 05:23:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:37.910 05:23:24 event -- common/autotest_common.sh@10 -- # set +x 00:32:37.910 ************************************ 00:32:37.910 START TEST event_scheduler 00:32:37.910 ************************************ 00:32:37.910 05:23:24 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:32:37.910 * Looking for test storage... 
00:32:37.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:32:37.910 05:23:24 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:37.910 05:23:24 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:32:37.910 05:23:24 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:38.170 05:23:24 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:38.170 05:23:24 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:38.170 05:23:24 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:38.170 05:23:24 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:38.170 05:23:24 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:32:38.170 05:23:24 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:32:38.170 05:23:24 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:32:38.170 05:23:24 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:32:38.170 05:23:24 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:32:38.170 05:23:24 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:32:38.170 05:23:24 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:32:38.170 05:23:24 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:38.170 05:23:24 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:32:38.170 05:23:24 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:32:38.170 05:23:24 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:38.170 05:23:24 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:38.170 05:23:24 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:32:38.170 05:23:24 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:32:38.170 05:23:24 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:38.170 05:23:24 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:32:38.170 05:23:24 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:32:38.170 05:23:24 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:32:38.170 05:23:24 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:32:38.170 05:23:24 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:38.170 05:23:24 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:32:38.170 05:23:24 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:32:38.170 05:23:24 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:38.170 05:23:24 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:38.170 05:23:24 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:32:38.170 05:23:24 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:38.170 05:23:24 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:38.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.170 --rc genhtml_branch_coverage=1 00:32:38.170 --rc genhtml_function_coverage=1 00:32:38.170 --rc genhtml_legend=1 00:32:38.170 --rc geninfo_all_blocks=1 00:32:38.170 --rc geninfo_unexecuted_blocks=1 00:32:38.170 00:32:38.170 ' 00:32:38.170 05:23:24 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:38.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.170 --rc genhtml_branch_coverage=1 00:32:38.170 --rc genhtml_function_coverage=1 00:32:38.170 --rc 
genhtml_legend=1 00:32:38.170 --rc geninfo_all_blocks=1 00:32:38.170 --rc geninfo_unexecuted_blocks=1 00:32:38.170 00:32:38.170 ' 00:32:38.170 05:23:24 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:38.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.170 --rc genhtml_branch_coverage=1 00:32:38.170 --rc genhtml_function_coverage=1 00:32:38.170 --rc genhtml_legend=1 00:32:38.170 --rc geninfo_all_blocks=1 00:32:38.170 --rc geninfo_unexecuted_blocks=1 00:32:38.170 00:32:38.170 ' 00:32:38.170 05:23:24 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:38.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.170 --rc genhtml_branch_coverage=1 00:32:38.170 --rc genhtml_function_coverage=1 00:32:38.170 --rc genhtml_legend=1 00:32:38.170 --rc geninfo_all_blocks=1 00:32:38.170 --rc geninfo_unexecuted_blocks=1 00:32:38.170 00:32:38.170 ' 00:32:38.170 05:23:24 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:32:38.170 05:23:24 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58179 00:32:38.170 05:23:24 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:32:38.170 05:23:24 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58179 00:32:38.170 05:23:24 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:32:38.170 05:23:24 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58179 ']' 00:32:38.170 05:23:24 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:38.170 05:23:24 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:38.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:38.170 05:23:24 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:38.170 05:23:24 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:38.170 05:23:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:32:38.170 [2024-12-09 05:23:25.087794] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:32:38.170 [2024-12-09 05:23:25.088031] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58179 ] 00:32:38.428 [2024-12-09 05:23:25.288533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:38.686 [2024-12-09 05:23:25.476628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:38.686 [2024-12-09 05:23:25.476839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:38.686 [2024-12-09 05:23:25.478092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:38.686 [2024-12-09 05:23:25.478105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:39.252 05:23:26 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:39.252 05:23:26 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:32:39.252 05:23:26 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:32:39.252 05:23:26 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.252 05:23:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:32:39.252 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:32:39.252 POWER: Cannot set governor of lcore 0 to userspace 00:32:39.252 POWER: 
failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:32:39.252 POWER: Cannot set governor of lcore 0 to performance 00:32:39.252 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:32:39.252 POWER: Cannot set governor of lcore 0 to userspace 00:32:39.252 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:32:39.252 POWER: Cannot set governor of lcore 0 to userspace 00:32:39.252 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:32:39.252 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:32:39.252 POWER: Unable to set Power Management Environment for lcore 0 00:32:39.252 [2024-12-09 05:23:26.053095] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:32:39.252 [2024-12-09 05:23:26.053125] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:32:39.252 [2024-12-09 05:23:26.053140] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:32:39.252 [2024-12-09 05:23:26.053189] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:32:39.252 [2024-12-09 05:23:26.053207] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:32:39.252 [2024-12-09 05:23:26.053222] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:32:39.252 05:23:26 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.252 05:23:26 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:32:39.252 05:23:26 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.252 05:23:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:32:39.510 [2024-12-09 05:23:26.435063] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:32:39.510 05:23:26 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.510 05:23:26 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:32:39.510 05:23:26 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:39.510 05:23:26 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:39.510 05:23:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:32:39.510 ************************************ 00:32:39.510 START TEST scheduler_create_thread 00:32:39.510 ************************************ 00:32:39.510 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:32:39.510 05:23:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:32:39.510 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.510 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:32:39.510 2 00:32:39.510 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.510 05:23:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:32:39.510 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.510 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:32:39.510 3 00:32:39.510 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.510 05:23:26 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:32:39.510 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.510 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:32:39.768 4 00:32:39.768 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.768 05:23:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:32:39.768 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.768 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:32:39.768 5 00:32:39.768 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.768 05:23:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:32:39.768 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.768 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:32:39.768 6 00:32:39.768 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.768 05:23:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:32:39.768 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.768 05:23:26 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:32:39.768 7 00:32:39.768 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.768 05:23:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:32:39.768 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.768 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:32:39.768 8 00:32:39.768 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.768 05:23:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:32:39.768 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.768 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:32:39.768 9 00:32:39.768 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.768 05:23:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:32:39.768 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.768 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:32:39.768 10 00:32:39.768 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.768 05:23:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:32:39.768 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.768 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:32:39.768 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.768 05:23:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:32:39.768 05:23:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:32:39.768 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.769 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:32:39.769 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.769 05:23:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:32:39.769 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.769 05:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:32:41.140 05:23:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.140 05:23:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:32:41.140 05:23:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:32:41.140 05:23:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.140 05:23:28 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:32:42.512 ************************************ 00:32:42.512 END TEST scheduler_create_thread 00:32:42.512 ************************************ 00:32:42.512 05:23:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.512 00:32:42.512 real 0m2.623s 00:32:42.512 user 0m0.020s 00:32:42.512 sys 0m0.006s 00:32:42.512 05:23:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:42.512 05:23:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:32:42.512 05:23:29 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:32:42.512 05:23:29 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58179 00:32:42.512 05:23:29 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58179 ']' 00:32:42.512 05:23:29 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58179 00:32:42.512 05:23:29 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:32:42.512 05:23:29 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:42.512 05:23:29 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58179 00:32:42.512 killing process with pid 58179 00:32:42.512 05:23:29 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:32:42.512 05:23:29 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:32:42.512 05:23:29 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58179' 00:32:42.512 05:23:29 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58179 00:32:42.512 05:23:29 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58179 00:32:42.769 [2024-12-09 05:23:29.549796] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:32:44.148 00:32:44.148 real 0m6.092s 00:32:44.148 user 0m10.201s 00:32:44.148 sys 0m0.636s 00:32:44.148 05:23:30 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:44.148 05:23:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:32:44.148 ************************************ 00:32:44.148 END TEST event_scheduler 00:32:44.148 ************************************ 00:32:44.148 05:23:30 event -- event/event.sh@51 -- # modprobe -n nbd 00:32:44.148 05:23:30 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:32:44.148 05:23:30 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:44.148 05:23:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:44.148 05:23:30 event -- common/autotest_common.sh@10 -- # set +x 00:32:44.148 ************************************ 00:32:44.148 START TEST app_repeat 00:32:44.148 ************************************ 00:32:44.148 05:23:30 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:32:44.148 05:23:30 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:44.148 05:23:30 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:44.148 05:23:30 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:32:44.148 05:23:30 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:32:44.148 05:23:30 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:32:44.148 05:23:30 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:32:44.148 05:23:30 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:32:44.148 05:23:30 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58290 00:32:44.148 05:23:30 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:32:44.148 
05:23:30 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:32:44.148 Process app_repeat pid: 58290 00:32:44.148 05:23:30 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58290' 00:32:44.148 05:23:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:32:44.148 spdk_app_start Round 0 00:32:44.148 05:23:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:32:44.148 05:23:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58290 /var/tmp/spdk-nbd.sock 00:32:44.148 05:23:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58290 ']' 00:32:44.148 05:23:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:32:44.148 05:23:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:44.148 05:23:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:32:44.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:32:44.148 05:23:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:44.148 05:23:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:32:44.148 [2024-12-09 05:23:30.957938] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:32:44.148 [2024-12-09 05:23:30.958133] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58290 ] 00:32:44.408 [2024-12-09 05:23:31.156472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:44.408 [2024-12-09 05:23:31.343244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:44.408 [2024-12-09 05:23:31.343245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:44.976 05:23:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:44.976 05:23:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:32:44.976 05:23:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:32:45.236 Malloc0 00:32:45.494 05:23:32 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:32:45.752 Malloc1 00:32:45.753 05:23:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:32:45.753 05:23:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:45.753 05:23:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:32:45.753 05:23:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:32:45.753 05:23:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:45.753 05:23:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:32:45.753 05:23:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:32:45.753 05:23:32 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:45.753 05:23:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:32:45.753 05:23:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:45.753 05:23:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:45.753 05:23:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:45.753 05:23:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:32:45.753 05:23:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:45.753 05:23:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:45.753 05:23:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:32:46.011 /dev/nbd0 00:32:46.011 05:23:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:46.011 05:23:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:46.011 05:23:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:32:46.011 05:23:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:32:46.011 05:23:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:46.011 05:23:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:46.011 05:23:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:32:46.011 05:23:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:32:46.011 05:23:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:46.011 05:23:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:46.011 05:23:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:32:46.011 1+0 records in 00:32:46.011 1+0 
records out 00:32:46.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269589 s, 15.2 MB/s 00:32:46.011 05:23:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:32:46.011 05:23:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:32:46.011 05:23:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:32:46.011 05:23:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:46.011 05:23:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:32:46.011 05:23:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:46.011 05:23:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:46.011 05:23:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:32:46.269 /dev/nbd1 00:32:46.269 05:23:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:46.269 05:23:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:46.269 05:23:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:32:46.269 05:23:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:32:46.269 05:23:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:46.269 05:23:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:46.269 05:23:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:32:46.269 05:23:33 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:32:46.269 05:23:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:46.269 05:23:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:46.269 05:23:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:32:46.269 1+0 records in 00:32:46.269 1+0 records out 00:32:46.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286502 s, 14.3 MB/s 00:32:46.269 05:23:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:32:46.269 05:23:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:32:46.269 05:23:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:32:46.269 05:23:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:46.269 05:23:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:32:46.269 05:23:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:46.269 05:23:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:46.269 05:23:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:46.269 05:23:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:46.269 05:23:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:46.528 05:23:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:32:46.528 { 00:32:46.528 "nbd_device": "/dev/nbd0", 00:32:46.528 "bdev_name": "Malloc0" 00:32:46.528 }, 00:32:46.528 { 00:32:46.528 "nbd_device": "/dev/nbd1", 00:32:46.528 "bdev_name": "Malloc1" 00:32:46.528 } 00:32:46.528 ]' 00:32:46.528 05:23:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:32:46.528 { 00:32:46.528 "nbd_device": "/dev/nbd0", 00:32:46.528 "bdev_name": "Malloc0" 00:32:46.528 }, 00:32:46.528 { 00:32:46.528 "nbd_device": "/dev/nbd1", 00:32:46.528 "bdev_name": "Malloc1" 00:32:46.528 } 00:32:46.528 ]' 00:32:46.528 05:23:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
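The `waitfornbd` calls above (autotest_common.sh@872-893) poll `/proc/partitions` up to 20 times for the new `nbdX` entry before touching the device. A sketch of that first phase, with a temp file standing in for `/proc/partitions` so it runs anywhere; the second phase visible in the log (a one-block `dd ... iflag=direct` read plus a `stat` size check) is omitted here since it needs a real block device:

```shell
#!/usr/bin/env bash
# Sketch of waitfornbd's /proc/partitions polling loop. The retry count (20)
# and sleep granularity are taken from the log; the partitions file argument
# is a stand-in added for testability.

waitfornbd() {
  local nbd_name=$1 partitions=$2 i
  # Wait up to ~2s for the kernel to publish the device name.
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$nbd_name" "$partitions" && break
    sleep 0.1
  done
  grep -q -w "$nbd_name" "$partitions" || return 1
  return 0
}

parts=$(mktemp)
printf '259 0 1048576 nbd0\n' > "$parts"   # fake /proc/partitions row
waitfornbd nbd0 "$parts" && echo "nbd0 present"
waitfornbd nbd9 "$parts" || echo "nbd9 absent"
rm -f "$parts"
```

`grep -w` matters here: without whole-word matching, waiting for `nbd1` would spuriously succeed on a line containing `nbd10`.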
00:32:46.786 05:23:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:32:46.786 /dev/nbd1' 00:32:46.786 05:23:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:32:46.786 /dev/nbd1' 00:32:46.786 05:23:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:46.786 05:23:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:32:46.786 05:23:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:32:46.786 05:23:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:32:46.786 05:23:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:32:46.786 05:23:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:32:46.786 05:23:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:46.786 05:23:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:32:46.786 05:23:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:32:46.786 05:23:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:32:46.786 05:23:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:32:46.786 05:23:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:32:46.786 256+0 records in 00:32:46.786 256+0 records out 00:32:46.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00781834 s, 134 MB/s 00:32:46.786 05:23:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:46.786 05:23:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:32:46.786 256+0 records in 00:32:46.786 256+0 records out 00:32:46.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246836 s, 42.5 MB/s 00:32:46.787 05:23:33 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:46.787 05:23:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:32:46.787 256+0 records in 00:32:46.787 256+0 records out 00:32:46.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0301088 s, 34.8 MB/s 00:32:46.787 05:23:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:32:46.787 05:23:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:46.787 05:23:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:32:46.787 05:23:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:32:46.787 05:23:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:32:46.787 05:23:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:32:46.787 05:23:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:32:46.787 05:23:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:46.787 05:23:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:32:46.787 05:23:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:46.787 05:23:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:32:46.787 05:23:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:32:46.787 05:23:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:32:46.787 05:23:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:46.787 05:23:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:46.787 05:23:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:46.787 05:23:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:32:46.787 05:23:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:46.787 05:23:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:47.045 05:23:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:47.045 05:23:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:47.045 05:23:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:47.045 05:23:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:47.045 05:23:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:47.045 05:23:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:47.045 05:23:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:32:47.045 05:23:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:32:47.045 05:23:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:47.045 05:23:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:32:47.303 05:23:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:47.303 05:23:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:47.303 05:23:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:47.303 05:23:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:47.303 05:23:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:47.303 05:23:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:47.303 05:23:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:32:47.303 05:23:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:32:47.303 05:23:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:47.303 05:23:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:47.561 05:23:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:47.821 05:23:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:32:47.821 05:23:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:32:47.821 05:23:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:47.821 05:23:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:32:47.821 05:23:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:32:47.821 05:23:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:47.821 05:23:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:32:47.821 05:23:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:32:47.821 05:23:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:32:47.821 05:23:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:32:47.821 05:23:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:32:47.821 05:23:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:32:47.821 05:23:34 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:32:48.387 05:23:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:32:49.760 [2024-12-09 05:23:36.355957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:49.760 [2024-12-09 05:23:36.486007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:49.760 [2024-12-09 05:23:36.486034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:49.760 
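The round that just finished exercised `nbd_dd_data_verify`: fill a temp file from `/dev/urandom`, `dd` it onto each NBD device, then `cmp` the device contents back against the source. A runnable sketch of that write/verify cycle, with plain temp files standing in for `/dev/nbd0` and `/dev/nbd1` (so `oflag=direct`, which the real test uses against block devices, is dropped):

```shell
#!/usr/bin/env bash
# Sketch of the nbd_dd_data_verify write/verify cycle (nbd_common.sh@70-85).
# ASSUMPTION: temp files replace the /dev/nbdX devices; sizes match the log
# (bs=4096 count=256, i.e. 1 MiB, compared with cmp -b -n 1M).
set -euo pipefail

tmp_file=$(mktemp)                  # plays the role of test/event/nbdrandtest
nbd_list=("$(mktemp)" "$(mktemp)")  # stand-ins for /dev/nbd0 /dev/nbd1

# Write phase: random source data, copied onto each "device".
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
for dev in "${nbd_list[@]}"; do
  dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
done

# Verify phase: byte-compare the first 1M of each "device" with the source.
for dev in "${nbd_list[@]}"; do
  cmp -b -n 1M "$tmp_file" "$dev"
done
verify_result=pass

rm -f "$tmp_file" "${nbd_list[@]}"
echo "verify ok"
```

Writing through the device and comparing against the original file is what proves the NBD export actually round-trips data to the malloc bdev, rather than merely accepting writes.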
[2024-12-09 05:23:36.690294] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:32:49.760 [2024-12-09 05:23:36.690463] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:32:51.661 05:23:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:32:51.661 spdk_app_start Round 1 00:32:51.661 05:23:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:32:51.661 05:23:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58290 /var/tmp/spdk-nbd.sock 00:32:51.661 05:23:38 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58290 ']' 00:32:51.661 05:23:38 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:32:51.661 05:23:38 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:51.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:32:51.661 05:23:38 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:32:51.661 05:23:38 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:51.661 05:23:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:32:51.661 05:23:38 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:51.661 05:23:38 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:32:51.661 05:23:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:32:51.918 Malloc0 00:32:51.918 05:23:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:32:52.483 Malloc1 00:32:52.483 05:23:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:32:52.483 05:23:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:52.483 05:23:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:32:52.483 05:23:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:32:52.483 05:23:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:52.483 05:23:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:32:52.483 05:23:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:32:52.483 05:23:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:52.483 05:23:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:32:52.483 05:23:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:52.483 05:23:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:52.483 05:23:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:52.483 05:23:39 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:32:52.483 05:23:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:52.483 05:23:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:52.483 05:23:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:32:52.741 /dev/nbd0 00:32:52.741 05:23:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:52.741 05:23:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:52.741 05:23:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:32:52.741 05:23:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:32:52.741 05:23:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:52.741 05:23:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:52.741 05:23:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:32:52.741 05:23:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:32:52.741 05:23:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:52.741 05:23:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:52.741 05:23:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:32:52.741 1+0 records in 00:32:52.741 1+0 records out 00:32:52.741 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320104 s, 12.8 MB/s 00:32:52.741 05:23:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:32:52.741 05:23:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:32:52.741 05:23:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:32:52.741 
05:23:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:52.741 05:23:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:32:52.741 05:23:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:52.741 05:23:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:52.741 05:23:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:32:52.999 /dev/nbd1 00:32:52.999 05:23:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:52.999 05:23:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:52.999 05:23:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:32:52.999 05:23:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:32:52.999 05:23:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:52.999 05:23:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:52.999 05:23:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:32:52.999 05:23:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:32:52.999 05:23:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:52.999 05:23:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:52.999 05:23:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:32:52.999 1+0 records in 00:32:52.999 1+0 records out 00:32:52.999 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240683 s, 17.0 MB/s 00:32:52.999 05:23:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:32:52.999 05:23:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:32:52.999 05:23:39 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:32:52.999 05:23:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:52.999 05:23:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:32:52.999 05:23:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:52.999 05:23:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:52.999 05:23:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:52.999 05:23:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:52.999 05:23:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:32:53.257 { 00:32:53.257 "nbd_device": "/dev/nbd0", 00:32:53.257 "bdev_name": "Malloc0" 00:32:53.257 }, 00:32:53.257 { 00:32:53.257 "nbd_device": "/dev/nbd1", 00:32:53.257 "bdev_name": "Malloc1" 00:32:53.257 } 00:32:53.257 ]' 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:32:53.257 { 00:32:53.257 "nbd_device": "/dev/nbd0", 00:32:53.257 "bdev_name": "Malloc0" 00:32:53.257 }, 00:32:53.257 { 00:32:53.257 "nbd_device": "/dev/nbd1", 00:32:53.257 "bdev_name": "Malloc1" 00:32:53.257 } 00:32:53.257 ]' 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:32:53.257 /dev/nbd1' 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:32:53.257 /dev/nbd1' 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:32:53.257 
05:23:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:32:53.257 256+0 records in 00:32:53.257 256+0 records out 00:32:53.257 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00793257 s, 132 MB/s 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:32:53.257 256+0 records in 00:32:53.257 256+0 records out 00:32:53.257 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260187 s, 40.3 MB/s 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:32:53.257 256+0 records in 00:32:53.257 256+0 records out 00:32:53.257 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0297114 s, 35.3 MB/s 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
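The `nbd_get_count` logic shown above pipes the JSON from `rpc.py ... nbd_get_disks` through `jq` to pull out the device paths, then counts them with `grep -c`. A sketch of that extraction, using a literal JSON string in place of the live RPC output:

```shell
#!/usr/bin/env bash
# Sketch of the nbd_get_count path (nbd_common.sh@61-66). The JSON literal
# below mirrors the shape seen in the log; the live version comes from
# rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks.

nbd_disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" }
]'

# Raw-mode jq strips the quotes, yielding one device path per line.
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')

# grep -c exits nonzero on zero matches, hence the || true fallback — the
# same reason the log shows "-- # true" after stopping the last disk.
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "count=$count"
```

The test then asserts `count` against the expected number of attached disks (2 after start, 0 after stop), which is the `'[' 2 -ne 2 ']'` / `'[' 0 -ne 0 ']'` check visible in the log.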
00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:53.257 05:23:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:53.822 05:23:40 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:53.822 05:23:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:53.822 05:23:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:53.822 05:23:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:53.822 05:23:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:53.822 05:23:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:53.822 05:23:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:32:53.822 05:23:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:32:53.822 05:23:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:53.822 05:23:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:32:54.081 05:23:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:54.081 05:23:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:54.081 05:23:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:54.081 05:23:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:54.081 05:23:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:54.081 05:23:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:54.081 05:23:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:32:54.081 05:23:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:32:54.081 05:23:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:54.081 05:23:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:54.081 05:23:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:54.339 05:23:41 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:32:54.339 05:23:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:32:54.339 05:23:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:54.339 05:23:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:32:54.339 05:23:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:32:54.339 05:23:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:54.339 05:23:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:32:54.339 05:23:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:32:54.339 05:23:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:32:54.339 05:23:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:32:54.339 05:23:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:32:54.339 05:23:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:32:54.339 05:23:41 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:32:54.906 05:23:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:32:56.281 [2024-12-09 05:23:42.859139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:56.281 [2024-12-09 05:23:43.003228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:56.281 [2024-12-09 05:23:43.003231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:56.281 [2024-12-09 05:23:43.229789] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:32:56.281 [2024-12-09 05:23:43.229881] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:32:58.207 spdk_app_start Round 2 00:32:58.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:32:58.207 05:23:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:32:58.207 05:23:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:32:58.207 05:23:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58290 /var/tmp/spdk-nbd.sock 00:32:58.207 05:23:44 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58290 ']' 00:32:58.207 05:23:44 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:32:58.207 05:23:44 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:58.207 05:23:44 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:32:58.207 05:23:44 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:58.207 05:23:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:32:58.207 05:23:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:58.207 05:23:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:32:58.207 05:23:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:32:58.468 Malloc0 00:32:58.468 05:23:45 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:32:59.045 Malloc1 00:32:59.045 05:23:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:32:59.045 05:23:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:59.045 05:23:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:32:59.045 05:23:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:32:59.045 05:23:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:59.045 05:23:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:32:59.045 05:23:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:32:59.045 05:23:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:59.045 05:23:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:32:59.045 05:23:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:59.045 05:23:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:59.045 05:23:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:59.045 05:23:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:32:59.045 05:23:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:59.045 05:23:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:59.045 05:23:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:32:59.302 /dev/nbd0 00:32:59.302 05:23:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:59.302 05:23:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:59.302 05:23:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:32:59.302 05:23:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:32:59.302 05:23:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:59.302 05:23:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:59.302 05:23:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:32:59.302 05:23:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:32:59.302 05:23:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:32:59.302 05:23:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:59.302 05:23:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:32:59.302 1+0 records in 00:32:59.302 1+0 records out 00:32:59.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318614 s, 12.9 MB/s 00:32:59.302 05:23:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:32:59.302 05:23:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:32:59.302 05:23:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:32:59.302 05:23:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:59.302 05:23:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:32:59.302 05:23:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:59.302 05:23:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:59.302 05:23:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:32:59.560 /dev/nbd1 00:32:59.560 05:23:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:59.560 05:23:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:59.560 05:23:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:32:59.560 05:23:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:32:59.560 05:23:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:59.560 05:23:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:59.560 05:23:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:32:59.560 05:23:46 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:32:59.560 05:23:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:59.560 05:23:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:59.560 05:23:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:32:59.560 1+0 records in 00:32:59.560 1+0 records out 00:32:59.560 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260373 s, 15.7 MB/s 00:32:59.560 05:23:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:32:59.560 05:23:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:32:59.560 05:23:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:32:59.560 05:23:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:59.560 05:23:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:32:59.560 05:23:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:59.560 05:23:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:59.560 05:23:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:59.560 05:23:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:59.560 05:23:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:00.125 05:23:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:33:00.125 { 00:33:00.125 "nbd_device": "/dev/nbd0", 00:33:00.125 "bdev_name": "Malloc0" 00:33:00.125 }, 00:33:00.125 { 00:33:00.125 "nbd_device": "/dev/nbd1", 00:33:00.125 "bdev_name": "Malloc1" 00:33:00.125 } 00:33:00.125 ]' 00:33:00.125 05:23:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:33:00.125 05:23:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:33:00.125 { 00:33:00.125 "nbd_device": "/dev/nbd0", 00:33:00.125 "bdev_name": "Malloc0" 00:33:00.125 }, 00:33:00.125 { 00:33:00.125 "nbd_device": "/dev/nbd1", 00:33:00.125 "bdev_name": "Malloc1" 00:33:00.125 } 00:33:00.125 ]' 00:33:00.125 05:23:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:33:00.125 /dev/nbd1' 00:33:00.125 05:23:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:33:00.125 /dev/nbd1' 00:33:00.125 05:23:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:00.125 05:23:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:33:00.125 05:23:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:33:00.125 05:23:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:33:00.125 05:23:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:33:00.125 05:23:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:33:00.125 05:23:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:00.125 05:23:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:00.125 05:23:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:33:00.125 05:23:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:33:00.125 05:23:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:33:00.125 05:23:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:33:00.125 256+0 records in 00:33:00.125 256+0 records out 00:33:00.125 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00764053 s, 137 MB/s 00:33:00.125 05:23:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:00.125 05:23:46 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:33:00.125 256+0 records in 00:33:00.125 256+0 records out 00:33:00.125 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0292065 s, 35.9 MB/s 00:33:00.125 05:23:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:00.125 05:23:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:33:00.125 256+0 records in 00:33:00.125 256+0 records out 00:33:00.125 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.031903 s, 32.9 MB/s 00:33:00.125 05:23:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:33:00.125 05:23:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:00.125 05:23:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:00.125 05:23:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:33:00.125 05:23:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:33:00.125 05:23:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:33:00.125 05:23:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:33:00.125 05:23:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:00.125 05:23:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:33:00.125 05:23:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:00.125 05:23:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:33:00.125 05:23:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:33:00.125 05:23:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:33:00.125 05:23:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:00.125 05:23:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:00.125 05:23:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:00.125 05:23:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:33:00.125 05:23:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:00.125 05:23:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:00.689 05:23:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:00.689 05:23:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:00.689 05:23:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:00.689 05:23:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:00.689 05:23:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:00.689 05:23:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:00.689 05:23:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:33:00.689 05:23:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:33:00.689 05:23:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:00.689 05:23:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:33:00.947 05:23:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:00.947 05:23:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:00.947 05:23:47 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:33:00.947 05:23:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:00.947 05:23:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:00.947 05:23:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:00.947 05:23:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:33:00.947 05:23:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:33:00.947 05:23:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:00.947 05:23:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:00.947 05:23:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:01.204 05:23:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:01.204 05:23:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:01.204 05:23:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:01.204 05:23:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:01.204 05:23:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:33:01.204 05:23:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:01.204 05:23:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:33:01.204 05:23:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:33:01.204 05:23:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:33:01.204 05:23:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:33:01.204 05:23:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:33:01.204 05:23:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:33:01.204 05:23:48 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:33:01.769 05:23:48 event.app_repeat -- 
event/event.sh@35 -- # sleep 3 00:33:03.141 [2024-12-09 05:23:49.894730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:03.141 [2024-12-09 05:23:50.052440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:03.141 [2024-12-09 05:23:50.052450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:03.400 [2024-12-09 05:23:50.285764] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:33:03.400 [2024-12-09 05:23:50.286192] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:33:04.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:33:04.774 05:23:51 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58290 /var/tmp/spdk-nbd.sock 00:33:04.774 05:23:51 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58290 ']' 00:33:04.774 05:23:51 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:33:04.774 05:23:51 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:04.774 05:23:51 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:33:04.774 05:23:51 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:04.774 05:23:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:33:05.032 05:23:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:05.032 05:23:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:33:05.032 05:23:51 event.app_repeat -- event/event.sh@39 -- # killprocess 58290 00:33:05.033 05:23:51 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58290 ']' 00:33:05.033 05:23:51 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58290 00:33:05.033 05:23:51 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:33:05.033 05:23:51 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:05.033 05:23:51 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58290 00:33:05.033 killing process with pid 58290 00:33:05.033 05:23:51 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:05.033 05:23:51 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:05.033 05:23:51 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58290' 00:33:05.033 05:23:51 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58290 00:33:05.033 05:23:51 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58290 00:33:06.423 spdk_app_start is called in Round 0. 00:33:06.423 Shutdown signal received, stop current app iteration 00:33:06.423 Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 reinitialization... 00:33:06.423 spdk_app_start is called in Round 1. 00:33:06.423 Shutdown signal received, stop current app iteration 00:33:06.423 Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 reinitialization... 00:33:06.423 spdk_app_start is called in Round 2. 
00:33:06.423 Shutdown signal received, stop current app iteration 00:33:06.423 Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 reinitialization... 00:33:06.423 spdk_app_start is called in Round 3. 00:33:06.423 Shutdown signal received, stop current app iteration 00:33:06.423 05:23:52 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:33:06.423 05:23:52 event.app_repeat -- event/event.sh@42 -- # return 0 00:33:06.423 00:33:06.423 real 0m22.093s 00:33:06.423 user 0m48.319s 00:33:06.423 sys 0m3.503s 00:33:06.423 05:23:52 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:06.423 05:23:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:33:06.423 ************************************ 00:33:06.423 END TEST app_repeat 00:33:06.423 ************************************ 00:33:06.423 05:23:53 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:33:06.423 05:23:53 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:33:06.423 05:23:53 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:06.423 05:23:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:06.423 05:23:53 event -- common/autotest_common.sh@10 -- # set +x 00:33:06.423 ************************************ 00:33:06.423 START TEST cpu_locks 00:33:06.423 ************************************ 00:33:06.423 05:23:53 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:33:06.423 * Looking for test storage... 
00:33:06.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:33:06.423 05:23:53 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:06.423 05:23:53 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:33:06.423 05:23:53 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:06.423 05:23:53 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:06.423 05:23:53 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:06.423 05:23:53 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:06.423 05:23:53 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:06.423 05:23:53 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:33:06.423 05:23:53 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:33:06.423 05:23:53 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:33:06.423 05:23:53 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:33:06.423 05:23:53 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:33:06.423 05:23:53 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:33:06.423 05:23:53 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:33:06.423 05:23:53 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:06.423 05:23:53 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:33:06.423 05:23:53 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:33:06.423 05:23:53 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:06.423 05:23:53 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:06.423 05:23:53 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:33:06.423 05:23:53 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:33:06.423 05:23:53 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:06.423 05:23:53 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:33:06.423 05:23:53 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:33:06.423 05:23:53 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:33:06.423 05:23:53 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:33:06.423 05:23:53 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:06.423 05:23:53 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:33:06.423 05:23:53 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:33:06.423 05:23:53 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:06.423 05:23:53 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:06.423 05:23:53 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:33:06.423 05:23:53 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:06.423 05:23:53 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:06.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.423 --rc genhtml_branch_coverage=1 00:33:06.423 --rc genhtml_function_coverage=1 00:33:06.423 --rc genhtml_legend=1 00:33:06.423 --rc geninfo_all_blocks=1 00:33:06.423 --rc geninfo_unexecuted_blocks=1 00:33:06.423 00:33:06.423 ' 00:33:06.423 05:23:53 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:06.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.423 --rc genhtml_branch_coverage=1 00:33:06.423 --rc genhtml_function_coverage=1 00:33:06.423 --rc genhtml_legend=1 00:33:06.423 --rc geninfo_all_blocks=1 00:33:06.423 --rc geninfo_unexecuted_blocks=1 
00:33:06.423 00:33:06.423 ' 00:33:06.423 05:23:53 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:06.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.423 --rc genhtml_branch_coverage=1 00:33:06.423 --rc genhtml_function_coverage=1 00:33:06.423 --rc genhtml_legend=1 00:33:06.423 --rc geninfo_all_blocks=1 00:33:06.423 --rc geninfo_unexecuted_blocks=1 00:33:06.423 00:33:06.423 ' 00:33:06.423 05:23:53 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:06.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.423 --rc genhtml_branch_coverage=1 00:33:06.423 --rc genhtml_function_coverage=1 00:33:06.423 --rc genhtml_legend=1 00:33:06.423 --rc geninfo_all_blocks=1 00:33:06.423 --rc geninfo_unexecuted_blocks=1 00:33:06.423 00:33:06.423 ' 00:33:06.423 05:23:53 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:33:06.423 05:23:53 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:33:06.423 05:23:53 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:33:06.423 05:23:53 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:33:06.423 05:23:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:06.423 05:23:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:06.423 05:23:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:33:06.423 ************************************ 00:33:06.423 START TEST default_locks 00:33:06.423 ************************************ 00:33:06.423 05:23:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:33:06.423 05:23:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58765 00:33:06.423 05:23:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58765 00:33:06.423 05:23:53 event.cpu_locks.default_locks -- 
common/autotest_common.sh@835 -- # '[' -z 58765 ']' 00:33:06.423 05:23:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:33:06.423 05:23:53 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:06.423 05:23:53 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:06.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:06.423 05:23:53 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:06.423 05:23:53 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:06.423 05:23:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:33:06.423 [2024-12-09 05:23:53.388788] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:33:06.423 [2024-12-09 05:23:53.389001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58765 ] 00:33:06.680 [2024-12-09 05:23:53.586360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:06.937 [2024-12-09 05:23:53.766743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:07.868 05:23:54 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:07.868 05:23:54 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:33:07.868 05:23:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58765 00:33:07.868 05:23:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58765 00:33:07.868 05:23:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:33:08.125 05:23:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58765 00:33:08.125 05:23:55 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58765 ']' 00:33:08.125 05:23:55 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58765 00:33:08.125 05:23:55 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:33:08.125 05:23:55 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:08.125 05:23:55 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58765 00:33:08.443 killing process with pid 58765 00:33:08.443 05:23:55 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:08.443 05:23:55 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:08.443 05:23:55 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58765'
00:33:08.443 05:23:55 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58765
00:33:08.443 05:23:55 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58765
00:33:10.977 05:23:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58765
00:33:10.977 05:23:57 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:33:10.977 05:23:57 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58765
00:33:10.977 05:23:57 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:33:10.977 05:23:57 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:10.977 05:23:57 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:33:10.977 05:23:57 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:10.977 05:23:57 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58765
00:33:10.977 05:23:57 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58765 ']'
00:33:10.977 05:23:57 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:10.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:10.977 ERROR: process (pid: 58765) is no longer running
00:33:10.977 05:23:57 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:10.977 05:23:57 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:10.977 05:23:57 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:10.977 05:23:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:33:10.977 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58765) - No such process
00:33:10.977 05:23:57 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:10.977 05:23:57 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:33:10.977 05:23:57 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:33:10.977 05:23:57 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:33:10.977 05:23:57 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:33:10.977 05:23:57 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:33:10.977 05:23:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:33:10.977 05:23:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:33:10.977 05:23:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:33:10.977 05:23:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:33:10.977
00:33:10.977 real 0m4.221s
00:33:10.977 user 0m4.181s
00:33:10.977 sys 0m0.808s
00:33:10.977 05:23:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:10.977 05:23:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:33:10.977 ************************************
00:33:10.977 END TEST default_locks
00:33:10.977 ************************************
00:33:10.977 05:23:57 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:33:10.977 05:23:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:33:10.977 05:23:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:10.977 05:23:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:33:10.977 ************************************
00:33:10.977 START TEST default_locks_via_rpc
00:33:10.977 ************************************
00:33:10.977 05:23:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:33:10.977 05:23:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58841
00:33:10.977 05:23:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58841
00:33:10.977 05:23:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58841 ']'
00:33:10.977 05:23:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:33:10.977 05:23:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:10.977 05:23:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:10.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:10.977 05:23:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:10.977 05:23:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:10.977 05:23:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:33:10.977 [2024-12-09 05:23:57.676716] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization...
00:33:10.977 [2024-12-09 05:23:57.676967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58841 ]
00:33:10.977 [2024-12-09 05:23:57.872529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:11.236 [2024-12-09 05:23:58.046609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:12.180 05:23:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:12.180 05:23:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:33:12.180 05:23:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:33:12.180 05:23:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:12.180 05:23:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:33:12.180 05:23:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:12.180 05:23:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:33:12.180 05:23:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:33:12.180 05:23:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:33:12.180 05:23:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:33:12.180 05:23:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:33:12.180 05:23:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:12.180 05:23:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:33:12.180 05:23:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:12.180 05:23:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58841
00:33:12.180 05:23:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58841
00:33:12.180 05:23:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:33:12.755 05:23:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58841
00:33:12.755 05:23:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58841 ']'
00:33:12.755 05:23:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58841
00:33:12.755 05:23:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:33:12.755 05:23:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:12.755 05:23:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58841
00:33:12.755 killing process with pid 58841
00:33:12.755 05:23:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:12.755 05:23:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:33:12.755 05:23:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58841'
00:33:12.755 05:23:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58841
00:33:12.755 05:23:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58841
00:33:15.297
00:33:15.297 real 0m4.476s
00:33:15.297 user 0m4.364s
00:33:15.297 sys 0m0.856s
00:33:15.297 05:24:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:15.297 05:24:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:33:15.297 ************************************
00:33:15.297 END TEST default_locks_via_rpc
00:33:15.297 ************************************
00:33:15.297 05:24:02 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:33:15.297 05:24:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:33:15.297 05:24:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:15.297 05:24:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:33:15.297 ************************************
00:33:15.297 START TEST non_locking_app_on_locked_coremask
00:33:15.297 ************************************
00:33:15.297 05:24:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:15.297 05:24:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58922
00:33:15.297 05:24:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58922 /var/tmp/spdk.sock
00:33:15.297 05:24:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58922 ']'
00:33:15.297 05:24:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:33:15.297 05:24:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:15.297 05:24:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:15.297 05:24:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:15.297 05:24:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:15.297 05:24:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:33:15.297 [2024-12-09 05:24:02.217357] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization...
00:33:15.297 [2024-12-09 05:24:02.217560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58922 ]
00:33:15.555 [2024-12-09 05:24:02.416566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:15.813 [2024-12-09 05:24:02.583211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:16.748 05:24:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:16.748 05:24:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:33:16.748 05:24:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:33:16.748 05:24:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58938
00:33:16.748 05:24:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58938 /var/tmp/spdk2.sock
00:33:16.748 05:24:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58938 ']'
00:33:16.748 05:24:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:33:16.748 05:24:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:16.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:33:16.748 05:24:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:33:16.748 05:24:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:16.748 05:24:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:33:16.748 [2024-12-09 05:24:03.608831] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization...
00:33:16.748 [2024-12-09 05:24:03.608999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58938 ]
00:33:17.008 [2024-12-09 05:24:03.808971] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:33:17.008 [2024-12-09 05:24:03.809055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:17.266 [2024-12-09 05:24:04.156414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:19.794 05:24:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:19.794 05:24:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:33:19.794 05:24:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58922
00:33:19.794 05:24:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58922
00:33:19.794 05:24:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:33:20.416 05:24:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58922
00:33:20.416 05:24:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58922 ']'
00:33:20.416 05:24:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58922
00:33:20.416 05:24:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:33:20.416 05:24:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:20.416 05:24:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58922
00:33:20.416 05:24:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:20.416 05:24:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 58922
00:33:20.416 05:24:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58922'
00:33:20.416 05:24:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58922
00:33:20.416 05:24:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58922
00:33:25.685 05:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58938
00:33:25.685 05:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58938 ']'
00:33:25.685 05:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58938
00:33:25.685 05:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:33:25.685 05:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:25.685 05:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58938
00:33:25.685 05:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:25.685 05:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 58938
00:33:25.685 05:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58938'
00:33:25.685 05:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58938
00:33:25.685 05:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58938
00:33:27.058
00:33:27.058 real 0m11.667s
00:33:27.058 user 0m12.107s
00:33:27.058 sys 0m1.690s
00:33:27.058 05:24:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:27.058 05:24:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:33:27.058 ************************************
00:33:27.058 END TEST non_locking_app_on_locked_coremask
00:33:27.058 ************************************
00:33:27.058 05:24:13 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:33:27.058 05:24:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:33:27.058 05:24:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:27.058 05:24:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:33:27.058 ************************************
00:33:27.058 START TEST locking_app_on_unlocked_coremask
00:33:27.058 ************************************
00:33:27.058 05:24:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:33:27.058 05:24:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59089
00:33:27.058 05:24:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:33:27.058 05:24:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59089 /var/tmp/spdk.sock
00:33:27.058 05:24:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59089 ']'
00:33:27.058 05:24:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:27.058 05:24:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:27.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:27.058 05:24:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:27.058 05:24:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:27.058 05:24:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:33:27.058 [2024-12-09 05:24:13.897474] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization...
00:33:27.058 [2024-12-09 05:24:13.897649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59089 ]
00:33:27.316 [2024-12-09 05:24:14.072295] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:33:27.316 [2024-12-09 05:24:14.072362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:27.316 [2024-12-09 05:24:14.209609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:28.254 05:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:28.254 05:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:33:28.254 05:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59111
00:33:28.254 05:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:33:28.254 05:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59111 /var/tmp/spdk2.sock
00:33:28.254 05:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59111 ']'
00:33:28.254 05:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:33:28.254 05:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:28.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:33:28.254 05:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:33:28.254 05:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:28.254 05:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:33:28.254 [2024-12-09 05:24:15.129611] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization...
00:33:28.254 [2024-12-09 05:24:15.129786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59111 ]
00:33:28.518 [2024-12-09 05:24:15.322365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:28.780 [2024-12-09 05:24:15.595746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:31.321 05:24:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:31.321 05:24:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:33:31.321 05:24:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59111
00:33:31.321 05:24:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:33:31.321 05:24:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59111
00:33:31.886 05:24:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59089
00:33:31.886 05:24:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59089 ']'
00:33:31.886 05:24:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59089
00:33:31.886 05:24:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:33:31.886 05:24:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:31.886 05:24:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59089
00:33:31.886 05:24:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:31.886 05:24:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 59089
00:33:31.886 05:24:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59089'
00:33:31.886 05:24:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59089
00:33:31.886 05:24:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59089
00:33:37.149 05:24:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59111
00:33:37.149 05:24:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59111 ']'
00:33:37.149 05:24:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59111
00:33:37.149 05:24:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:33:37.149 05:24:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:37.149 05:24:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59111
00:33:37.149 05:24:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:37.149 05:24:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 59111
00:33:37.149 05:24:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59111'
00:33:37.149 05:24:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59111
00:33:37.149 05:24:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59111
00:33:39.051
00:33:39.051 real 0m12.145s
00:33:39.051 user 0m12.511s
00:33:39.051 sys 0m1.687s
00:33:39.051 05:24:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:39.051 05:24:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:33:39.051 ************************************
00:33:39.051 END TEST locking_app_on_unlocked_coremask
00:33:39.051 ************************************
00:33:39.051 05:24:25 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:33:39.051 05:24:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:33:39.051 05:24:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:39.051 05:24:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:33:39.051 ************************************
00:33:39.051 START TEST locking_app_on_locked_coremask
00:33:39.051 ************************************
00:33:39.051 05:24:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:33:39.051 05:24:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59264
00:33:39.051 05:24:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:33:39.051 05:24:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59264 /var/tmp/spdk.sock
00:33:39.051 05:24:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59264 ']'
00:33:39.051 05:24:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:39.051 05:24:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:39.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:39.051 05:24:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:39.051 05:24:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:39.051 05:24:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:33:39.309 [2024-12-09 05:24:26.142995] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization...
00:33:39.309 [2024-12-09 05:24:26.143233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59264 ]
00:33:39.568 [2024-12-09 05:24:26.333425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:39.568 [2024-12-09 05:24:26.484288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:40.505 05:24:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:40.505 05:24:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:33:40.505 05:24:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59286
00:33:40.505 05:24:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:33:40.505 05:24:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59286 /var/tmp/spdk2.sock
00:33:40.764 05:24:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:33:40.764 05:24:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59286 /var/tmp/spdk2.sock
00:33:40.765 05:24:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:33:40.765 05:24:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:40.765 05:24:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:33:40.765 05:24:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:40.765 05:24:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59286 /var/tmp/spdk2.sock
00:33:40.765 05:24:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59286 ']'
00:33:40.765 05:24:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:33:40.765 05:24:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:40.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:33:40.765 05:24:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:33:40.765 05:24:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:40.765 05:24:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:33:40.765 [2024-12-09 05:24:27.614587] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization...
00:33:40.765 [2024-12-09 05:24:27.614819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59286 ]
00:33:41.023 [2024-12-09 05:24:27.827620] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59264 has claimed it.
00:33:41.023 [2024-12-09 05:24:27.827764] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:33:41.590 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59286) - No such process
00:33:41.590 ERROR: process (pid: 59286) is no longer running
00:33:41.590 05:24:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:41.590 05:24:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:33:41.590 05:24:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:33:41.590 05:24:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:33:41.590 05:24:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:33:41.590 05:24:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:33:41.590 05:24:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59264
00:33:41.590 05:24:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59264
00:33:41.590 05:24:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:33:41.849 05:24:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59264
00:33:41.849 05:24:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59264 ']'
00:33:41.849 05:24:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59264
00:33:41.849 05:24:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:33:41.849 05:24:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:41.849 05:24:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59264
00:33:41.849 05:24:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:41.849 05:24:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 59264
00:33:41.849 05:24:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59264'
00:33:41.849 05:24:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59264
00:33:41.850 05:24:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59264
00:33:44.381
00:33:44.382 real 0m5.302s
00:33:44.382 user 0m5.572s
00:33:44.382 sys 0m1.084s
00:33:44.382 05:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:44.382 05:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:33:44.382 ************************************
00:33:44.382 END TEST locking_app_on_locked_coremask
00:33:44.382 ************************************
00:33:44.382 05:24:31 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:33:44.382 05:24:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:33:44.382 05:24:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:44.382 05:24:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:33:44.640 ************************************
00:33:44.640 START TEST locking_overlapped_coremask
00:33:44.640 ************************************
00:33:44.640 05:24:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:33:44.640 05:24:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:33:44.640 05:24:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59361
00:33:44.640 05:24:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59361 /var/tmp/spdk.sock
00:33:44.640 05:24:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59361 ']'
00:33:44.640 05:24:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:44.640 05:24:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:44.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:44.640 05:24:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:44.640 05:24:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:44.640 05:24:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:33:44.640 [2024-12-09 05:24:31.473731] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization...
00:33:44.640 [2024-12-09 05:24:31.473902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59361 ]
00:33:44.901 [2024-12-09 05:24:31.653762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:33:44.901 [2024-12-09 05:24:31.804785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:33:44.901 [2024-12-09 05:24:31.804877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:44.901 [2024-12-09 05:24:31.804896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:33:46.278 05:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:46.278 05:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:33:46.278 05:24:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59379
00:33:46.278 05:24:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:33:46.278 05:24:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59379 /var/tmp/spdk2.sock
00:33:46.278 05:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:33:46.278 05:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59379 /var/tmp/spdk2.sock
00:33:46.278 05:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:33:46.278 05:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:46.278 05:24:32 event.cpu_locks.locking_overlapped_coremask
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:33:46.278 05:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:46.278 05:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59379 /var/tmp/spdk2.sock 00:33:46.278 05:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59379 ']' 00:33:46.278 05:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:33:46.278 05:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:46.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:33:46.278 05:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:33:46.278 05:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:46.278 05:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:33:46.278 [2024-12-09 05:24:32.948567] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:33:46.278 [2024-12-09 05:24:32.948844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59379 ] 00:33:46.278 [2024-12-09 05:24:33.157178] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59361 has claimed it. 00:33:46.278 [2024-12-09 05:24:33.157267] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:33:46.845 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59379) - No such process 00:33:46.845 ERROR: process (pid: 59379) is no longer running 00:33:46.845 05:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:46.845 05:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:33:46.845 05:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:33:46.845 05:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:46.845 05:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:46.845 05:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:46.845 05:24:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:33:46.845 05:24:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:33:46.845 05:24:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:33:46.845 05:24:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:33:46.845 05:24:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59361 00:33:46.845 05:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59361 ']' 00:33:46.845 05:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59361 00:33:46.845 05:24:33 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:33:46.845 05:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:46.845 05:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59361 00:33:46.845 05:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:46.845 05:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:46.845 killing process with pid 59361 00:33:46.845 05:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59361' 00:33:46.845 05:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59361 00:33:46.845 05:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59361 00:33:49.373 00:33:49.373 real 0m4.683s 00:33:49.373 user 0m12.662s 00:33:49.373 sys 0m0.801s 00:33:49.373 05:24:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:49.373 05:24:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:33:49.373 ************************************ 00:33:49.373 END TEST locking_overlapped_coremask 00:33:49.373 ************************************ 00:33:49.373 05:24:36 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:33:49.373 05:24:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:49.373 05:24:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:49.373 05:24:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:33:49.373 ************************************ 00:33:49.373 START TEST 
locking_overlapped_coremask_via_rpc 00:33:49.373 ************************************ 00:33:49.373 05:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:33:49.373 05:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59446 00:33:49.373 05:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:33:49.373 05:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59446 /var/tmp/spdk.sock 00:33:49.373 05:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59446 ']' 00:33:49.373 05:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:49.373 05:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:49.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:49.373 05:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:49.373 05:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:49.373 05:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:49.373 [2024-12-09 05:24:36.218643] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:33:49.373 [2024-12-09 05:24:36.218831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59446 ] 00:33:49.631 [2024-12-09 05:24:36.401364] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:33:49.631 [2024-12-09 05:24:36.401418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:49.631 [2024-12-09 05:24:36.560437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:49.631 [2024-12-09 05:24:36.560565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:49.631 [2024-12-09 05:24:36.560578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:50.565 05:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:50.565 05:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:33:50.565 05:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:33:50.565 05:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59471 00:33:50.565 05:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59471 /var/tmp/spdk2.sock 00:33:50.565 05:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59471 ']' 00:33:50.565 05:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:33:50.565 05:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:50.565 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:33:50.565 05:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:33:50.565 05:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:50.565 05:24:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:50.824 [2024-12-09 05:24:37.619736] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:33:50.824 [2024-12-09 05:24:37.619942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59471 ] 00:33:51.082 [2024-12-09 05:24:37.833757] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:33:51.082 [2024-12-09 05:24:37.833858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:51.340 [2024-12-09 05:24:38.144230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:51.340 [2024-12-09 05:24:38.144346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:51.340 [2024-12-09 05:24:38.144387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:53.870 05:24:40 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:53.870 [2024-12-09 05:24:40.403046] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59446 has claimed it. 00:33:53.870 request: 00:33:53.870 { 00:33:53.870 "method": "framework_enable_cpumask_locks", 00:33:53.870 "req_id": 1 00:33:53.870 } 00:33:53.870 Got JSON-RPC error response 00:33:53.870 response: 00:33:53.870 { 00:33:53.870 "code": -32603, 00:33:53.870 "message": "Failed to claim CPU core: 2" 00:33:53.870 } 00:33:53.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59446 /var/tmp/spdk.sock 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59446 ']' 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59471 /var/tmp/spdk2.sock 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59471 ']' 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:33:53.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:53.870 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:54.132 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:54.132 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:33:54.132 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:33:54.132 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:33:54.132 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:33:54.132 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:33:54.132 00:33:54.132 real 0m4.886s 00:33:54.132 user 0m1.735s 00:33:54.132 sys 0m0.263s 00:33:54.132 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:54.132 ************************************ 00:33:54.132 END TEST locking_overlapped_coremask_via_rpc 00:33:54.132 ************************************ 00:33:54.132 05:24:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:54.132 05:24:41 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:33:54.132 05:24:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59446 ]] 00:33:54.132 05:24:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59446 00:33:54.132 05:24:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59446 ']' 00:33:54.132 05:24:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59446 00:33:54.132 05:24:41 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:33:54.132 05:24:41 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:54.132 05:24:41 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59446 00:33:54.132 killing process with pid 59446 00:33:54.132 05:24:41 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:54.132 05:24:41 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:54.132 05:24:41 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59446' 00:33:54.132 05:24:41 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59446 00:33:54.132 05:24:41 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59446 00:33:56.659 05:24:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59471 ]] 00:33:56.659 05:24:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59471 00:33:56.659 05:24:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59471 ']' 00:33:56.659 05:24:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59471 00:33:56.659 05:24:43 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:33:56.659 05:24:43 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:56.659 05:24:43 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59471 00:33:56.659 killing process with pid 59471 00:33:56.659 05:24:43 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:33:56.659 05:24:43 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:33:56.659 05:24:43 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59471' 00:33:56.659 05:24:43 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59471 00:33:56.659 05:24:43 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59471 00:33:59.190 05:24:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:33:59.190 05:24:45 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:33:59.190 05:24:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59446 ]] 00:33:59.190 05:24:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59446 00:33:59.190 05:24:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59446 ']' 00:33:59.190 05:24:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59446 00:33:59.190 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59446) - No such process 00:33:59.190 Process with pid 59446 is not found 00:33:59.190 05:24:45 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59446 is not found' 00:33:59.190 05:24:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59471 ]] 00:33:59.190 05:24:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59471 00:33:59.190 05:24:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59471 ']' 00:33:59.190 05:24:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59471 00:33:59.190 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59471) - No such process 00:33:59.190 Process with pid 59471 is not found 00:33:59.190 05:24:45 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59471 is not found' 00:33:59.190 05:24:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:33:59.190 00:33:59.190 real 0m52.732s 00:33:59.190 user 1m29.763s 00:33:59.190 sys 0m8.657s 00:33:59.190 05:24:45 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:59.190 05:24:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:33:59.190 
************************************ 00:33:59.190 END TEST cpu_locks 00:33:59.190 ************************************ 00:33:59.190 ************************************ 00:33:59.190 END TEST event 00:33:59.190 ************************************ 00:33:59.190 00:33:59.190 real 1m26.585s 00:33:59.190 user 2m35.906s 00:33:59.190 sys 0m13.455s 00:33:59.190 05:24:45 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:59.190 05:24:45 event -- common/autotest_common.sh@10 -- # set +x 00:33:59.190 05:24:45 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:33:59.190 05:24:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:59.190 05:24:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:59.190 05:24:45 -- common/autotest_common.sh@10 -- # set +x 00:33:59.190 ************************************ 00:33:59.190 START TEST thread 00:33:59.190 ************************************ 00:33:59.190 05:24:45 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:33:59.190 * Looking for test storage... 
00:33:59.190 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:33:59.190 05:24:45 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:59.190 05:24:45 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:33:59.190 05:24:45 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:59.190 05:24:46 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:59.190 05:24:46 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:59.190 05:24:46 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:59.190 05:24:46 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:59.190 05:24:46 thread -- scripts/common.sh@336 -- # IFS=.-: 00:33:59.190 05:24:46 thread -- scripts/common.sh@336 -- # read -ra ver1 00:33:59.190 05:24:46 thread -- scripts/common.sh@337 -- # IFS=.-: 00:33:59.190 05:24:46 thread -- scripts/common.sh@337 -- # read -ra ver2 00:33:59.190 05:24:46 thread -- scripts/common.sh@338 -- # local 'op=<' 00:33:59.190 05:24:46 thread -- scripts/common.sh@340 -- # ver1_l=2 00:33:59.190 05:24:46 thread -- scripts/common.sh@341 -- # ver2_l=1 00:33:59.190 05:24:46 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:59.190 05:24:46 thread -- scripts/common.sh@344 -- # case "$op" in 00:33:59.190 05:24:46 thread -- scripts/common.sh@345 -- # : 1 00:33:59.190 05:24:46 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:59.190 05:24:46 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:59.190 05:24:46 thread -- scripts/common.sh@365 -- # decimal 1 00:33:59.190 05:24:46 thread -- scripts/common.sh@353 -- # local d=1 00:33:59.190 05:24:46 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:59.190 05:24:46 thread -- scripts/common.sh@355 -- # echo 1 00:33:59.190 05:24:46 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:33:59.190 05:24:46 thread -- scripts/common.sh@366 -- # decimal 2 00:33:59.190 05:24:46 thread -- scripts/common.sh@353 -- # local d=2 00:33:59.190 05:24:46 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:59.190 05:24:46 thread -- scripts/common.sh@355 -- # echo 2 00:33:59.190 05:24:46 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:33:59.190 05:24:46 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:59.190 05:24:46 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:59.190 05:24:46 thread -- scripts/common.sh@368 -- # return 0 00:33:59.190 05:24:46 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:59.190 05:24:46 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:59.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.190 --rc genhtml_branch_coverage=1 00:33:59.190 --rc genhtml_function_coverage=1 00:33:59.190 --rc genhtml_legend=1 00:33:59.190 --rc geninfo_all_blocks=1 00:33:59.190 --rc geninfo_unexecuted_blocks=1 00:33:59.190 00:33:59.190 ' 00:33:59.190 05:24:46 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:59.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.190 --rc genhtml_branch_coverage=1 00:33:59.190 --rc genhtml_function_coverage=1 00:33:59.190 --rc genhtml_legend=1 00:33:59.190 --rc geninfo_all_blocks=1 00:33:59.190 --rc geninfo_unexecuted_blocks=1 00:33:59.190 00:33:59.190 ' 00:33:59.190 05:24:46 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:59.190 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.190 --rc genhtml_branch_coverage=1 00:33:59.190 --rc genhtml_function_coverage=1 00:33:59.190 --rc genhtml_legend=1 00:33:59.190 --rc geninfo_all_blocks=1 00:33:59.190 --rc geninfo_unexecuted_blocks=1 00:33:59.190 00:33:59.190 ' 00:33:59.191 05:24:46 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:59.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.191 --rc genhtml_branch_coverage=1 00:33:59.191 --rc genhtml_function_coverage=1 00:33:59.191 --rc genhtml_legend=1 00:33:59.191 --rc geninfo_all_blocks=1 00:33:59.191 --rc geninfo_unexecuted_blocks=1 00:33:59.191 00:33:59.191 ' 00:33:59.191 05:24:46 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:33:59.191 05:24:46 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:33:59.191 05:24:46 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:59.191 05:24:46 thread -- common/autotest_common.sh@10 -- # set +x 00:33:59.191 ************************************ 00:33:59.191 START TEST thread_poller_perf 00:33:59.191 ************************************ 00:33:59.191 05:24:46 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:33:59.191 [2024-12-09 05:24:46.120335] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:33:59.191 [2024-12-09 05:24:46.120505] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59671 ] 00:33:59.448 [2024-12-09 05:24:46.316197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:59.706 [2024-12-09 05:24:46.483648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:59.706 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:34:01.080 [2024-12-09T05:24:48.052Z] ====================================== 00:34:01.080 [2024-12-09T05:24:48.052Z] busy:2213486242 (cyc) 00:34:01.080 [2024-12-09T05:24:48.052Z] total_run_count: 336000 00:34:01.080 [2024-12-09T05:24:48.052Z] tsc_hz: 2200000000 (cyc) 00:34:01.080 [2024-12-09T05:24:48.052Z] ====================================== 00:34:01.080 [2024-12-09T05:24:48.052Z] poller_cost: 6587 (cyc), 2994 (nsec) 00:34:01.080 00:34:01.080 real 0m1.724s 00:34:01.080 user 0m1.501s 00:34:01.080 sys 0m0.114s 00:34:01.080 05:24:47 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:01.080 05:24:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:34:01.080 ************************************ 00:34:01.080 END TEST thread_poller_perf 00:34:01.080 ************************************ 00:34:01.080 05:24:47 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:34:01.080 05:24:47 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:34:01.080 05:24:47 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:01.080 05:24:47 thread -- common/autotest_common.sh@10 -- # set +x 00:34:01.080 ************************************ 00:34:01.080 START TEST thread_poller_perf 00:34:01.080 
************************************ 00:34:01.080 05:24:47 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:34:01.080 [2024-12-09 05:24:47.895313] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:34:01.080 [2024-12-09 05:24:47.895504] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59713 ] 00:34:01.339 [2024-12-09 05:24:48.081010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:01.339 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:34:01.339 [2024-12-09 05:24:48.209633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:02.711 [2024-12-09T05:24:49.683Z] ====================================== 00:34:02.711 [2024-12-09T05:24:49.683Z] busy:2204038112 (cyc) 00:34:02.711 [2024-12-09T05:24:49.683Z] total_run_count: 4335000 00:34:02.711 [2024-12-09T05:24:49.683Z] tsc_hz: 2200000000 (cyc) 00:34:02.711 [2024-12-09T05:24:49.683Z] ====================================== 00:34:02.711 [2024-12-09T05:24:49.683Z] poller_cost: 508 (cyc), 230 (nsec) 00:34:02.711 00:34:02.711 real 0m1.682s 00:34:02.711 user 0m1.459s 00:34:02.711 sys 0m0.114s 00:34:02.711 05:24:49 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:02.711 05:24:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:34:02.711 ************************************ 00:34:02.711 END TEST thread_poller_perf 00:34:02.711 ************************************ 00:34:02.711 05:24:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:34:02.711 00:34:02.711 real 0m3.705s 00:34:02.711 user 0m3.115s 00:34:02.711 sys 0m0.375s 00:34:02.711 05:24:49 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:34:02.711 05:24:49 thread -- common/autotest_common.sh@10 -- # set +x 00:34:02.711 ************************************ 00:34:02.711 END TEST thread 00:34:02.711 ************************************ 00:34:02.711 05:24:49 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:34:02.711 05:24:49 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:34:02.711 05:24:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:02.711 05:24:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:02.711 05:24:49 -- common/autotest_common.sh@10 -- # set +x 00:34:02.711 ************************************ 00:34:02.711 START TEST app_cmdline 00:34:02.711 ************************************ 00:34:02.711 05:24:49 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:34:03.007 * Looking for test storage... 00:34:03.007 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:34:03.007 05:24:49 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:03.007 05:24:49 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:34:03.007 05:24:49 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:03.007 05:24:49 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:03.007 05:24:49 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:03.007 05:24:49 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:03.007 05:24:49 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:03.007 05:24:49 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:34:03.007 05:24:49 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:34:03.007 05:24:49 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:34:03.007 05:24:49 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:34:03.007 05:24:49 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:34:03.007 05:24:49 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:34:03.007 05:24:49 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:34:03.007 05:24:49 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:03.007 05:24:49 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:34:03.007 05:24:49 app_cmdline -- scripts/common.sh@345 -- # : 1 00:34:03.007 05:24:49 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:03.007 05:24:49 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:03.007 05:24:49 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:34:03.007 05:24:49 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:34:03.007 05:24:49 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:03.007 05:24:49 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:34:03.007 05:24:49 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:34:03.007 05:24:49 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:34:03.007 05:24:49 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:34:03.007 05:24:49 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:03.007 05:24:49 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:34:03.007 05:24:49 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:34:03.007 05:24:49 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:03.007 05:24:49 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:03.007 05:24:49 app_cmdline -- scripts/common.sh@368 -- # return 0 00:34:03.007 05:24:49 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:03.007 05:24:49 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:03.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.007 --rc genhtml_branch_coverage=1 00:34:03.007 --rc genhtml_function_coverage=1 00:34:03.007 --rc 
genhtml_legend=1 00:34:03.007 --rc geninfo_all_blocks=1 00:34:03.007 --rc geninfo_unexecuted_blocks=1 00:34:03.007 00:34:03.007 ' 00:34:03.007 05:24:49 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:03.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.007 --rc genhtml_branch_coverage=1 00:34:03.007 --rc genhtml_function_coverage=1 00:34:03.007 --rc genhtml_legend=1 00:34:03.007 --rc geninfo_all_blocks=1 00:34:03.007 --rc geninfo_unexecuted_blocks=1 00:34:03.007 00:34:03.007 ' 00:34:03.007 05:24:49 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:03.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.007 --rc genhtml_branch_coverage=1 00:34:03.007 --rc genhtml_function_coverage=1 00:34:03.007 --rc genhtml_legend=1 00:34:03.007 --rc geninfo_all_blocks=1 00:34:03.007 --rc geninfo_unexecuted_blocks=1 00:34:03.007 00:34:03.007 ' 00:34:03.007 05:24:49 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:03.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.007 --rc genhtml_branch_coverage=1 00:34:03.007 --rc genhtml_function_coverage=1 00:34:03.007 --rc genhtml_legend=1 00:34:03.007 --rc geninfo_all_blocks=1 00:34:03.007 --rc geninfo_unexecuted_blocks=1 00:34:03.007 00:34:03.007 ' 00:34:03.007 05:24:49 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:34:03.007 05:24:49 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59797 00:34:03.007 05:24:49 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59797 00:34:03.007 05:24:49 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:34:03.007 05:24:49 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59797 ']' 00:34:03.007 05:24:49 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:03.007 05:24:49 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:34:03.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:03.007 05:24:49 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:03.007 05:24:49 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:03.007 05:24:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:34:03.007 [2024-12-09 05:24:49.965458] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:34:03.007 [2024-12-09 05:24:49.966250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59797 ] 00:34:03.280 [2024-12-09 05:24:50.151692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:03.538 [2024-12-09 05:24:50.295682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:04.472 05:24:51 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:04.472 05:24:51 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:34:04.472 05:24:51 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:34:04.730 { 00:34:04.730 "version": "SPDK v25.01-pre git sha1 afe42438a", 00:34:04.730 "fields": { 00:34:04.730 "major": 25, 00:34:04.730 "minor": 1, 00:34:04.730 "patch": 0, 00:34:04.730 "suffix": "-pre", 00:34:04.730 "commit": "afe42438a" 00:34:04.730 } 00:34:04.730 } 00:34:04.730 05:24:51 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:34:04.730 05:24:51 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:34:04.730 05:24:51 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:34:04.730 05:24:51 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:34:04.730 05:24:51 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:34:04.730 05:24:51 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:34:04.730 05:24:51 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.730 05:24:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:34:04.730 05:24:51 app_cmdline -- app/cmdline.sh@26 -- # sort 00:34:04.730 05:24:51 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.730 05:24:51 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:34:04.731 05:24:51 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:34:04.731 05:24:51 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:34:04.731 05:24:51 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:34:04.731 05:24:51 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:34:04.731 05:24:51 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:04.731 05:24:51 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:04.731 05:24:51 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:04.731 05:24:51 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:04.731 05:24:51 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:04.731 05:24:51 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:04.731 05:24:51 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:04.731 05:24:51 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:34:04.731 05:24:51 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:34:04.990 request: 00:34:04.990 { 00:34:04.990 "method": "env_dpdk_get_mem_stats", 00:34:04.990 "req_id": 1 00:34:04.990 } 00:34:04.990 Got JSON-RPC error response 00:34:04.990 response: 00:34:04.990 { 00:34:04.990 "code": -32601, 00:34:04.990 "message": "Method not found" 00:34:04.990 } 00:34:04.990 05:24:51 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:34:04.990 05:24:51 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:04.990 05:24:51 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:04.990 05:24:51 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:04.990 05:24:51 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59797 00:34:04.990 05:24:51 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59797 ']' 00:34:04.990 05:24:51 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59797 00:34:04.990 05:24:51 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:34:04.990 05:24:51 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:04.990 05:24:51 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59797 00:34:04.990 05:24:51 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:04.990 05:24:51 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:04.990 killing process with pid 59797 00:34:04.990 05:24:51 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59797' 00:34:04.990 05:24:51 app_cmdline -- common/autotest_common.sh@973 -- # kill 59797 00:34:04.990 05:24:51 app_cmdline -- common/autotest_common.sh@978 -- # wait 59797 00:34:07.522 00:34:07.522 real 0m4.464s 00:34:07.522 user 0m4.776s 00:34:07.522 sys 0m0.739s 00:34:07.522 05:24:54 app_cmdline -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:34:07.522 ************************************ 00:34:07.522 END TEST app_cmdline 00:34:07.522 ************************************ 00:34:07.522 05:24:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:34:07.522 05:24:54 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:34:07.522 05:24:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:07.522 05:24:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:07.522 05:24:54 -- common/autotest_common.sh@10 -- # set +x 00:34:07.522 ************************************ 00:34:07.522 START TEST version 00:34:07.522 ************************************ 00:34:07.522 05:24:54 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:34:07.522 * Looking for test storage... 00:34:07.522 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:34:07.522 05:24:54 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:07.522 05:24:54 version -- common/autotest_common.sh@1693 -- # lcov --version 00:34:07.522 05:24:54 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:07.522 05:24:54 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:07.522 05:24:54 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:07.522 05:24:54 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:07.522 05:24:54 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:07.522 05:24:54 version -- scripts/common.sh@336 -- # IFS=.-: 00:34:07.522 05:24:54 version -- scripts/common.sh@336 -- # read -ra ver1 00:34:07.522 05:24:54 version -- scripts/common.sh@337 -- # IFS=.-: 00:34:07.522 05:24:54 version -- scripts/common.sh@337 -- # read -ra ver2 00:34:07.522 05:24:54 version -- scripts/common.sh@338 -- # local 'op=<' 00:34:07.522 05:24:54 version -- scripts/common.sh@340 -- # ver1_l=2 00:34:07.522 05:24:54 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:34:07.522 05:24:54 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:07.522 05:24:54 version -- scripts/common.sh@344 -- # case "$op" in 00:34:07.522 05:24:54 version -- scripts/common.sh@345 -- # : 1 00:34:07.522 05:24:54 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:07.522 05:24:54 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:07.522 05:24:54 version -- scripts/common.sh@365 -- # decimal 1 00:34:07.522 05:24:54 version -- scripts/common.sh@353 -- # local d=1 00:34:07.522 05:24:54 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:07.522 05:24:54 version -- scripts/common.sh@355 -- # echo 1 00:34:07.522 05:24:54 version -- scripts/common.sh@365 -- # ver1[v]=1 00:34:07.522 05:24:54 version -- scripts/common.sh@366 -- # decimal 2 00:34:07.522 05:24:54 version -- scripts/common.sh@353 -- # local d=2 00:34:07.522 05:24:54 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:07.522 05:24:54 version -- scripts/common.sh@355 -- # echo 2 00:34:07.522 05:24:54 version -- scripts/common.sh@366 -- # ver2[v]=2 00:34:07.522 05:24:54 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:07.522 05:24:54 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:07.522 05:24:54 version -- scripts/common.sh@368 -- # return 0 00:34:07.522 05:24:54 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:07.522 05:24:54 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:07.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.522 --rc genhtml_branch_coverage=1 00:34:07.522 --rc genhtml_function_coverage=1 00:34:07.522 --rc genhtml_legend=1 00:34:07.522 --rc geninfo_all_blocks=1 00:34:07.522 --rc geninfo_unexecuted_blocks=1 00:34:07.522 00:34:07.522 ' 00:34:07.522 05:24:54 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:34:07.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.522 --rc genhtml_branch_coverage=1 00:34:07.522 --rc genhtml_function_coverage=1 00:34:07.522 --rc genhtml_legend=1 00:34:07.522 --rc geninfo_all_blocks=1 00:34:07.522 --rc geninfo_unexecuted_blocks=1 00:34:07.522 00:34:07.522 ' 00:34:07.522 05:24:54 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:07.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.522 --rc genhtml_branch_coverage=1 00:34:07.522 --rc genhtml_function_coverage=1 00:34:07.522 --rc genhtml_legend=1 00:34:07.522 --rc geninfo_all_blocks=1 00:34:07.522 --rc geninfo_unexecuted_blocks=1 00:34:07.522 00:34:07.522 ' 00:34:07.522 05:24:54 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:07.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.522 --rc genhtml_branch_coverage=1 00:34:07.522 --rc genhtml_function_coverage=1 00:34:07.522 --rc genhtml_legend=1 00:34:07.522 --rc geninfo_all_blocks=1 00:34:07.522 --rc geninfo_unexecuted_blocks=1 00:34:07.522 00:34:07.522 ' 00:34:07.522 05:24:54 version -- app/version.sh@17 -- # get_header_version major 00:34:07.523 05:24:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:34:07.523 05:24:54 version -- app/version.sh@14 -- # cut -f2 00:34:07.523 05:24:54 version -- app/version.sh@14 -- # tr -d '"' 00:34:07.523 05:24:54 version -- app/version.sh@17 -- # major=25 00:34:07.523 05:24:54 version -- app/version.sh@18 -- # get_header_version minor 00:34:07.523 05:24:54 version -- app/version.sh@14 -- # cut -f2 00:34:07.523 05:24:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:34:07.523 05:24:54 version -- app/version.sh@14 -- # tr -d '"' 00:34:07.523 05:24:54 version -- app/version.sh@18 -- # minor=1 00:34:07.523 05:24:54 
version -- app/version.sh@19 -- # get_header_version patch 00:34:07.523 05:24:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:34:07.523 05:24:54 version -- app/version.sh@14 -- # cut -f2 00:34:07.523 05:24:54 version -- app/version.sh@14 -- # tr -d '"' 00:34:07.523 05:24:54 version -- app/version.sh@19 -- # patch=0 00:34:07.523 05:24:54 version -- app/version.sh@20 -- # get_header_version suffix 00:34:07.523 05:24:54 version -- app/version.sh@14 -- # cut -f2 00:34:07.523 05:24:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:34:07.523 05:24:54 version -- app/version.sh@14 -- # tr -d '"' 00:34:07.523 05:24:54 version -- app/version.sh@20 -- # suffix=-pre 00:34:07.523 05:24:54 version -- app/version.sh@22 -- # version=25.1 00:34:07.523 05:24:54 version -- app/version.sh@25 -- # (( patch != 0 )) 00:34:07.523 05:24:54 version -- app/version.sh@28 -- # version=25.1rc0 00:34:07.523 05:24:54 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:34:07.523 05:24:54 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:34:07.523 05:24:54 version -- app/version.sh@30 -- # py_version=25.1rc0 00:34:07.523 05:24:54 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:34:07.523 ************************************ 00:34:07.523 END TEST version 00:34:07.523 ************************************ 00:34:07.523 00:34:07.523 real 0m0.252s 00:34:07.523 user 0m0.155s 00:34:07.523 sys 0m0.140s 00:34:07.523 05:24:54 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:07.523 05:24:54 version -- common/autotest_common.sh@10 -- # set +x 00:34:07.523 
05:24:54 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:34:07.523 05:24:54 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:34:07.523 05:24:54 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:34:07.523 05:24:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:07.523 05:24:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:07.523 05:24:54 -- common/autotest_common.sh@10 -- # set +x 00:34:07.523 ************************************ 00:34:07.523 START TEST bdev_raid 00:34:07.523 ************************************ 00:34:07.523 05:24:54 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:34:07.782 * Looking for test storage... 00:34:07.782 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:34:07.782 05:24:54 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:07.782 05:24:54 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:34:07.782 05:24:54 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:07.782 05:24:54 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:07.782 05:24:54 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:07.782 05:24:54 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:07.782 05:24:54 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:07.782 05:24:54 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:34:07.782 05:24:54 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:34:07.782 05:24:54 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:34:07.782 05:24:54 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:34:07.782 05:24:54 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:34:07.782 05:24:54 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:34:07.782 05:24:54 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:34:07.782 05:24:54 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:34:07.782 05:24:54 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:34:07.782 05:24:54 bdev_raid -- scripts/common.sh@345 -- # : 1 00:34:07.782 05:24:54 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:07.782 05:24:54 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:07.782 05:24:54 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:34:07.782 05:24:54 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:34:07.782 05:24:54 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:07.782 05:24:54 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:34:07.782 05:24:54 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:34:07.782 05:24:54 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:34:07.782 05:24:54 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:34:07.782 05:24:54 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:07.782 05:24:54 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:34:07.782 05:24:54 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:34:07.782 05:24:54 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:07.782 05:24:54 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:07.782 05:24:54 bdev_raid -- scripts/common.sh@368 -- # return 0 00:34:07.782 05:24:54 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:07.782 05:24:54 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:07.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.782 --rc genhtml_branch_coverage=1 00:34:07.782 --rc genhtml_function_coverage=1 00:34:07.782 --rc genhtml_legend=1 00:34:07.782 --rc geninfo_all_blocks=1 00:34:07.782 --rc geninfo_unexecuted_blocks=1 00:34:07.782 00:34:07.782 ' 00:34:07.782 05:24:54 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:07.782 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:34:07.782 --rc genhtml_branch_coverage=1 00:34:07.782 --rc genhtml_function_coverage=1 00:34:07.782 --rc genhtml_legend=1 00:34:07.782 --rc geninfo_all_blocks=1 00:34:07.782 --rc geninfo_unexecuted_blocks=1 00:34:07.782 00:34:07.782 ' 00:34:07.782 05:24:54 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:07.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.782 --rc genhtml_branch_coverage=1 00:34:07.782 --rc genhtml_function_coverage=1 00:34:07.782 --rc genhtml_legend=1 00:34:07.782 --rc geninfo_all_blocks=1 00:34:07.782 --rc geninfo_unexecuted_blocks=1 00:34:07.782 00:34:07.782 ' 00:34:07.782 05:24:54 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:07.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.782 --rc genhtml_branch_coverage=1 00:34:07.782 --rc genhtml_function_coverage=1 00:34:07.782 --rc genhtml_legend=1 00:34:07.782 --rc geninfo_all_blocks=1 00:34:07.782 --rc geninfo_unexecuted_blocks=1 00:34:07.782 00:34:07.782 ' 00:34:07.782 05:24:54 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:34:07.782 05:24:54 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:34:07.782 05:24:54 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:34:07.782 05:24:54 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:34:07.782 05:24:54 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:34:07.782 05:24:54 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:34:07.782 05:24:54 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:34:07.782 05:24:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:07.782 05:24:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:07.782 05:24:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:07.782 ************************************ 
00:34:07.782 START TEST raid1_resize_data_offset_test 00:34:07.782 ************************************ 00:34:07.782 05:24:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:34:07.782 05:24:54 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=59984 00:34:07.782 Process raid pid: 59984 00:34:07.782 05:24:54 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59984' 00:34:07.782 05:24:54 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59984 00:34:07.782 05:24:54 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:34:07.782 05:24:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 59984 ']' 00:34:07.782 05:24:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:07.782 05:24:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:07.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:07.782 05:24:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:07.782 05:24:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:07.782 05:24:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:34:08.040 [2024-12-09 05:24:54.793478] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:34:08.040 [2024-12-09 05:24:54.794360] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:08.040 [2024-12-09 05:24:54.987696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:08.307 [2024-12-09 05:24:55.122494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:08.570 [2024-12-09 05:24:55.336706] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:08.570 [2024-12-09 05:24:55.336791] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:08.829 05:24:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:08.829 05:24:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:34:08.829 05:24:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:34:08.829 05:24:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.829 05:24:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:34:09.087 malloc0 00:34:09.087 05:24:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.087 05:24:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:34:09.087 05:24:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.087 05:24:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:34:09.087 malloc1 00:34:09.087 05:24:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.087 05:24:55 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:34:09.087 05:24:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.087 05:24:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:34:09.087 null0 00:34:09.087 05:24:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.087 05:24:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:34:09.087 05:24:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.087 05:24:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:34:09.087 [2024-12-09 05:24:55.913230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:34:09.087 [2024-12-09 05:24:55.916015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:34:09.087 [2024-12-09 05:24:55.916195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:34:09.087 [2024-12-09 05:24:55.916407] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:34:09.087 [2024-12-09 05:24:55.916444] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:34:09.087 [2024-12-09 05:24:55.916847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:34:09.087 [2024-12-09 05:24:55.917074] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:34:09.087 [2024-12-09 05:24:55.917106] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:34:09.087 [2024-12-09 05:24:55.917394] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:34:09.087 05:24:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.087 05:24:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:09.087 05:24:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.087 05:24:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:34:09.087 05:24:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:34:09.087 05:24:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.087 05:24:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:34:09.087 05:24:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:34:09.087 05:24:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.087 05:24:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:34:09.087 [2024-12-09 05:24:55.973340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:34:09.087 05:24:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.087 05:24:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:34:09.087 05:24:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.087 05:24:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:34:09.654 malloc2 00:34:09.654 05:24:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.654 05:24:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:34:09.654 05:24:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.654 05:24:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:34:09.654 [2024-12-09 05:24:56.489966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:34:09.654 [2024-12-09 05:24:56.508336] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:34:09.654 05:24:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.655 [2024-12-09 05:24:56.511059] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:34:09.655 05:24:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:09.655 05:24:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.655 05:24:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:34:09.655 05:24:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:34:09.655 05:24:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.655 05:24:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:34:09.655 05:24:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59984 00:34:09.655 05:24:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 59984 ']' 00:34:09.655 05:24:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 59984 00:34:09.655 05:24:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:34:09.655 05:24:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:34:09.655 05:24:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59984 00:34:09.655 05:24:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:09.655 05:24:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:09.655 killing process with pid 59984 00:34:09.655 05:24:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59984' 00:34:09.655 05:24:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 59984 00:34:09.655 05:24:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 59984 00:34:09.655 [2024-12-09 05:24:56.599209] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:09.655 [2024-12-09 05:24:56.599609] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:34:09.655 [2024-12-09 05:24:56.599680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:09.655 [2024-12-09 05:24:56.599707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:34:09.913 [2024-12-09 05:24:56.630976] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:09.913 [2024-12-09 05:24:56.631404] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:09.913 [2024-12-09 05:24:56.631438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:34:11.288 [2024-12-09 05:24:58.181931] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:12.686 05:24:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:34:12.686 00:34:12.686 real 0m4.635s 00:34:12.686 user 0m4.492s 00:34:12.686 sys 0m0.712s 00:34:12.686 05:24:59 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:12.686 05:24:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:34:12.686 ************************************ 00:34:12.686 END TEST raid1_resize_data_offset_test 00:34:12.686 ************************************ 00:34:12.687 05:24:59 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:34:12.687 05:24:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:12.687 05:24:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:12.687 05:24:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:12.687 ************************************ 00:34:12.687 START TEST raid0_resize_superblock_test 00:34:12.687 ************************************ 00:34:12.687 05:24:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:34:12.687 05:24:59 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:34:12.687 05:24:59 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60068 00:34:12.687 Process raid pid: 60068 00:34:12.687 05:24:59 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60068' 00:34:12.687 05:24:59 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60068 00:34:12.687 05:24:59 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:34:12.687 05:24:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60068 ']' 00:34:12.687 05:24:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:12.687 05:24:59 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:34:12.687 05:24:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:12.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:12.687 05:24:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:12.687 05:24:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:12.687 [2024-12-09 05:24:59.480526] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:34:12.687 [2024-12-09 05:24:59.480702] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:12.945 [2024-12-09 05:24:59.669863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:12.945 [2024-12-09 05:24:59.791655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:13.203 [2024-12-09 05:24:59.995844] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:13.203 [2024-12-09 05:24:59.995891] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:13.771 05:25:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:13.771 05:25:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:34:13.771 05:25:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:34:13.771 05:25:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.771 05:25:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:34:14.031 malloc0 00:34:14.031 05:25:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.031 05:25:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:34:14.031 05:25:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.031 05:25:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.031 [2024-12-09 05:25:00.971791] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:34:14.031 [2024-12-09 05:25:00.971882] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:14.031 [2024-12-09 05:25:00.971912] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:34:14.031 [2024-12-09 05:25:00.971930] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:14.031 [2024-12-09 05:25:00.974738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:14.031 [2024-12-09 05:25:00.974821] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:34:14.031 pt0 00:34:14.031 05:25:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.031 05:25:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:34:14.031 05:25:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.031 05:25:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.289 83c7d1c7-21c1-448a-9283-a03f95203e56 00:34:14.289 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.289 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:34:14.289 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.289 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.289 71875a04-89b2-4e5f-b452-a35aca78d381 00:34:14.290 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.290 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:34:14.290 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.290 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.290 7876ac70-a2ac-47f0-92ec-dd9f08c39b03 00:34:14.290 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.290 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:34:14.290 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:34:14.290 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.290 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.290 [2024-12-09 05:25:01.149865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 71875a04-89b2-4e5f-b452-a35aca78d381 is claimed 00:34:14.290 [2024-12-09 05:25:01.150020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7876ac70-a2ac-47f0-92ec-dd9f08c39b03 is claimed 00:34:14.290 [2024-12-09 05:25:01.150209] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:34:14.290 [2024-12-09 05:25:01.150246] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:34:14.290 [2024-12-09 05:25:01.150624] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:34:14.290 [2024-12-09 05:25:01.150982] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:34:14.290 [2024-12-09 05:25:01.151011] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:34:14.290 [2024-12-09 05:25:01.151208] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:14.290 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.290 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:34:14.290 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.290 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.290 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:34:14.290 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.290 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:34:14.290 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:34:14.290 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.290 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.290 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:34:14.290 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:34:14.549 05:25:01 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:34:14.549 [2024-12-09 05:25:01.270177] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.549 [2024-12-09 05:25:01.318088] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:34:14.549 [2024-12-09 05:25:01.318123] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '71875a04-89b2-4e5f-b452-a35aca78d381' was resized: old size 131072, new size 204800 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.549 [2024-12-09 05:25:01.326031] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:34:14.549 [2024-12-09 05:25:01.326062] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '7876ac70-a2ac-47f0-92ec-dd9f08c39b03' was resized: old size 131072, new size 204800 00:34:14.549 [2024-12-09 05:25:01.326092] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.549 05:25:01 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:34:14.549 [2024-12-09 05:25:01.438212] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.549 [2024-12-09 05:25:01.485965] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:34:14.549 [2024-12-09 05:25:01.486100] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:34:14.549 [2024-12-09 05:25:01.486123] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:14.549 [2024-12-09 05:25:01.486143] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:34:14.549 [2024-12-09 05:25:01.486286] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:14.549 [2024-12-09 05:25:01.486335] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:14.549 [2024-12-09 05:25:01.486356] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.549 [2024-12-09 05:25:01.493861] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:34:14.549 [2024-12-09 05:25:01.493941] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:14.549 [2024-12-09 05:25:01.494009] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:34:14.549 [2024-12-09 05:25:01.494029] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:14.549 [2024-12-09 05:25:01.497004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:14.549 [2024-12-09 05:25:01.497065] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:34:14.549 pt0 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.549 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.549 [2024-12-09 05:25:01.499410] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 71875a04-89b2-4e5f-b452-a35aca78d381 00:34:14.549 [2024-12-09 05:25:01.499495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 71875a04-89b2-4e5f-b452-a35aca78d381 is claimed 00:34:14.549 [2024-12-09 05:25:01.499662] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 7876ac70-a2ac-47f0-92ec-dd9f08c39b03 00:34:14.549 [2024-12-09 05:25:01.499695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7876ac70-a2ac-47f0-92ec-dd9f08c39b03 is claimed 00:34:14.549 [2024-12-09 05:25:01.499877] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 7876ac70-a2ac-47f0-92ec-dd9f08c39b03 (2) smaller than existing raid bdev Raid (3) 00:34:14.549 [2024-12-09 05:25:01.499923] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 71875a04-89b2-4e5f-b452-a35aca78d381: File exists 00:34:14.549 [2024-12-09 05:25:01.499974] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:34:14.549 [2024-12-09 05:25:01.499996] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:34:14.550 [2024-12-09 05:25:01.500333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:34:14.550 [2024-12-09 05:25:01.500554] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:34:14.550 [2024-12-09 
05:25:01.500574] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:34:14.550 [2024-12-09 05:25:01.500757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:14.550 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.550 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:34:14.550 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:34:14.550 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:34:14.550 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:34:14.550 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.550 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.550 [2024-12-09 05:25:01.514170] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:14.808 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.808 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:34:14.808 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:34:14.808 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:34:14.808 05:25:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60068 00:34:14.808 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60068 ']' 00:34:14.808 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60068 00:34:14.808 05:25:01 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:34:14.808 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:14.808 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60068 00:34:14.808 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:14.808 killing process with pid 60068 00:34:14.808 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:14.808 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60068' 00:34:14.808 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60068 00:34:14.808 [2024-12-09 05:25:01.592746] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:14.808 05:25:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60068 00:34:14.808 [2024-12-09 05:25:01.592831] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:14.808 [2024-12-09 05:25:01.592887] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:14.808 [2024-12-09 05:25:01.592902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:34:16.182 [2024-12-09 05:25:02.884028] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:17.115 05:25:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:34:17.115 00:34:17.115 real 0m4.688s 00:34:17.115 user 0m4.909s 00:34:17.115 sys 0m0.702s 00:34:17.115 05:25:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:17.115 ************************************ 00:34:17.115 END TEST raid0_resize_superblock_test 00:34:17.115 
************************************ 00:34:17.115 05:25:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:17.373 05:25:04 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:34:17.373 05:25:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:17.373 05:25:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:17.373 05:25:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:17.373 ************************************ 00:34:17.373 START TEST raid1_resize_superblock_test 00:34:17.373 ************************************ 00:34:17.373 05:25:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:34:17.373 05:25:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:34:17.373 Process raid pid: 60171 00:34:17.373 05:25:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60171 00:34:17.373 05:25:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60171' 00:34:17.373 05:25:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60171 00:34:17.373 05:25:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:34:17.373 05:25:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60171 ']' 00:34:17.373 05:25:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:17.373 05:25:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:17.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:17.373 05:25:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:17.373 05:25:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:17.373 05:25:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:17.373 [2024-12-09 05:25:04.230043] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:34:17.373 [2024-12-09 05:25:04.230233] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:17.631 [2024-12-09 05:25:04.430078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:17.888 [2024-12-09 05:25:04.601750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:17.888 [2024-12-09 05:25:04.830636] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:17.888 [2024-12-09 05:25:04.830676] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:18.454 05:25:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:18.454 05:25:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:34:18.454 05:25:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:34:18.454 05:25:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.454 05:25:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.020 malloc0 00:34:19.020 05:25:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.020 05:25:05 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:34:19.020 05:25:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.020 05:25:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.020 [2024-12-09 05:25:05.792057] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:34:19.020 [2024-12-09 05:25:05.792158] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:19.020 [2024-12-09 05:25:05.792189] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:34:19.020 [2024-12-09 05:25:05.792207] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:19.020 [2024-12-09 05:25:05.795256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:19.020 [2024-12-09 05:25:05.795302] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:34:19.020 pt0 00:34:19.020 05:25:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.020 05:25:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:34:19.020 05:25:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.020 05:25:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.020 65c91d86-4085-42a9-af3f-9a2183238ad9 00:34:19.020 05:25:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.020 05:25:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:34:19.020 05:25:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.020 05:25:05 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.020 0efbaea3-0315-4ba6-ae74-a5598c98ec8e 00:34:19.020 05:25:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.020 05:25:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:34:19.020 05:25:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.020 05:25:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.020 a16e2452-5c40-4c89-8183-9fb8b904f7bf 00:34:19.020 05:25:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.020 05:25:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:34:19.020 05:25:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:34:19.020 05:25:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.020 05:25:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.020 [2024-12-09 05:25:05.974560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0efbaea3-0315-4ba6-ae74-a5598c98ec8e is claimed 00:34:19.020 [2024-12-09 05:25:05.974690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev a16e2452-5c40-4c89-8183-9fb8b904f7bf is claimed 00:34:19.020 [2024-12-09 05:25:05.974897] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:34:19.020 [2024-12-09 05:25:05.974922] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:34:19.020 [2024-12-09 05:25:05.975232] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:34:19.020 [2024-12-09 05:25:05.975479] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:34:19.020 [2024-12-09 05:25:05.975496] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:34:19.020 [2024-12-09 05:25:05.975670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:19.020 05:25:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.020 05:25:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:34:19.020 05:25:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.020 05:25:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:34:19.020 05:25:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.278 [2024-12-09 05:25:06.094879] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.278 [2024-12-09 05:25:06.146751] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:34:19.278 [2024-12-09 05:25:06.146949] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '0efbaea3-0315-4ba6-ae74-a5598c98ec8e' was resized: old size 131072, new size 204800 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:34:19.278 05:25:06 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.278 [2024-12-09 05:25:06.154725] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:34:19.278 [2024-12-09 05:25:06.154751] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'a16e2452-5c40-4c89-8183-9fb8b904f7bf' was resized: old size 131072, new size 204800 00:34:19.278 [2024-12-09 05:25:06.154851] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:34:19.278 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:34:19.279 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.279 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.279 05:25:06 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.537 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:34:19.537 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:34:19.537 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:34:19.537 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.537 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.537 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:34:19.537 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:34:19.537 [2024-12-09 05:25:06.278987] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:19.537 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.537 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:34:19.537 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:34:19.537 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:34:19.537 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:34:19.537 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.537 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.537 [2024-12-09 05:25:06.326668] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:34:19.537 [2024-12-09 05:25:06.326785] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:34:19.537 [2024-12-09 05:25:06.326861] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:34:19.537 [2024-12-09 05:25:06.327033] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:19.537 [2024-12-09 05:25:06.327338] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:19.537 [2024-12-09 05:25:06.327521] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:19.537 [2024-12-09 05:25:06.327550] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:34:19.537 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.537 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:34:19.537 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.537 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.537 [2024-12-09 05:25:06.334611] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:34:19.537 [2024-12-09 05:25:06.334705] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:19.537 [2024-12-09 05:25:06.334732] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:34:19.537 [2024-12-09 05:25:06.334753] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:19.537 [2024-12-09 05:25:06.338217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:19.537 [2024-12-09 05:25:06.338422] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:34:19.537 pt0 00:34:19.537 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.537 
05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:34:19.537 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.537 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.537 [2024-12-09 05:25:06.341060] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 0efbaea3-0315-4ba6-ae74-a5598c98ec8e 00:34:19.537 [2024-12-09 05:25:06.341162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0efbaea3-0315-4ba6-ae74-a5598c98ec8e is claimed 00:34:19.537 [2024-12-09 05:25:06.341292] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev a16e2452-5c40-4c89-8183-9fb8b904f7bf 00:34:19.537 [2024-12-09 05:25:06.341354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev a16e2452-5c40-4c89-8183-9fb8b904f7bf is claimed 00:34:19.537 [2024-12-09 05:25:06.341521] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev a16e2452-5c40-4c89-8183-9fb8b904f7bf (2) smaller than existing raid bdev Raid (3) 00:34:19.537 [2024-12-09 05:25:06.341553] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 0efbaea3-0315-4ba6-ae74-a5598c98ec8e: File exists 00:34:19.537 [2024-12-09 05:25:06.341646] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:34:19.537 [2024-12-09 05:25:06.341665] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:34:19.537 [2024-12-09 05:25:06.342040] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:34:19.537 [2024-12-09 05:25:06.342275] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:34:19.537 [2024-12-09 05:25:06.342291] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:34:19.537 
[2024-12-09 05:25:06.342503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:19.538 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.538 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:34:19.538 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:34:19.538 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.538 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.538 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:34:19.538 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:34:19.538 [2024-12-09 05:25:06.355058] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:19.538 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.538 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:34:19.538 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:34:19.538 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:34:19.538 05:25:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60171 00:34:19.538 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60171 ']' 00:34:19.538 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60171 00:34:19.538 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:34:19.538 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:34:19.538 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60171 00:34:19.538 killing process with pid 60171 00:34:19.538 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:19.538 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:19.538 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60171' 00:34:19.538 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60171 00:34:19.538 [2024-12-09 05:25:06.441930] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:19.538 [2024-12-09 05:25:06.442019] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:19.538 05:25:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60171 00:34:19.538 [2024-12-09 05:25:06.442080] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:19.538 [2024-12-09 05:25:06.442095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:34:20.914 [2024-12-09 05:25:07.873076] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:22.291 05:25:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:34:22.291 00:34:22.291 real 0m5.082s 00:34:22.291 user 0m5.297s 00:34:22.291 sys 0m0.770s 00:34:22.291 ************************************ 00:34:22.291 END TEST raid1_resize_superblock_test 00:34:22.291 ************************************ 00:34:22.291 05:25:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:22.291 05:25:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:22.291 
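The resize test above reports the raid1 bdev growing from 122880 to 196608 blocks when each 64 MiB lvol is resized to 100 MiB. Those numbers are consistent with a fixed per-base-bdev reservation of 8192 blocks (4 MiB at 512 B) for the superblock: the figure is inferred from the logged values, not taken from the SPDK source. A minimal sketch of that arithmetic, under that assumption:

```shell
#!/bin/sh
# Sketch: reproduce the blockcnt values seen in the log above.
# Assumption (inferred, not from SPDK source): the raid superblock
# reserves 8192 blocks of each base bdev; raid1 capacity is the
# smallest base bdev minus that reservation.
blocklen=512
sb_blocks=8192

mib_blocks() { echo $(( $1 * 1024 * 1024 / blocklen )); }

before=$(( $(mib_blocks 64) - sb_blocks ))    # lvols created at 64 MiB
after=$(( $(mib_blocks 100) - sb_blocks ))    # lvols resized to 100 MiB

echo "raid1 blockcnt: $before -> $after"
```

Run standalone this prints `raid1 blockcnt: 122880 -> 196608`, matching the `block count was changed from 122880 to 196608` notice and the `(( 196608 == 196608 ))` check in the transcript.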
05:25:09 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:34:22.291 05:25:09 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:34:22.291 05:25:09 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:34:22.291 05:25:09 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:34:22.291 05:25:09 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:34:22.548 05:25:09 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:34:22.548 05:25:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:22.548 05:25:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:22.548 05:25:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:22.548 ************************************ 00:34:22.548 START TEST raid_function_test_raid0 00:34:22.548 ************************************ 00:34:22.548 05:25:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:34:22.548 05:25:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:34:22.549 05:25:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:34:22.549 05:25:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:34:22.549 Process raid pid: 60280 00:34:22.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:22.549 05:25:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60280 00:34:22.549 05:25:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:34:22.549 05:25:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60280' 00:34:22.549 05:25:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60280 00:34:22.549 05:25:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60280 ']' 00:34:22.549 05:25:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:22.549 05:25:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:22.549 05:25:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:22.549 05:25:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:22.549 05:25:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:34:22.549 [2024-12-09 05:25:09.387867] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:34:22.549 [2024-12-09 05:25:09.388322] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:22.807 [2024-12-09 05:25:09.572704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:22.807 [2024-12-09 05:25:09.706072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:23.067 [2024-12-09 05:25:09.913114] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:23.067 [2024-12-09 05:25:09.913174] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:23.634 05:25:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:23.634 05:25:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:34:23.634 05:25:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:34:23.634 05:25:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.634 05:25:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:34:23.634 Base_1 00:34:23.634 05:25:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.634 05:25:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:34:23.634 05:25:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.634 05:25:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:34:23.634 Base_2 00:34:23.634 05:25:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.634 05:25:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:34:23.634 05:25:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.634 05:25:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:34:23.634 [2024-12-09 05:25:10.418030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:34:23.634 [2024-12-09 05:25:10.420514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:34:23.634 [2024-12-09 05:25:10.420619] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:34:23.634 [2024-12-09 05:25:10.420638] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:34:23.634 [2024-12-09 05:25:10.421125] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:34:23.634 [2024-12-09 05:25:10.421466] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:34:23.634 [2024-12-09 05:25:10.421595] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:34:23.634 [2024-12-09 05:25:10.421828] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:23.634 05:25:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.634 05:25:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:34:23.634 05:25:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.635 05:25:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:34:23.635 05:25:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:34:23.635 05:25:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.635 05:25:10 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:34:23.635 05:25:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:34:23.635 05:25:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:34:23.635 05:25:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:34:23.635 05:25:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:34:23.635 05:25:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:23.635 05:25:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:34:23.635 05:25:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:23.635 05:25:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:34:23.635 05:25:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:23.635 05:25:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:23.635 05:25:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:34:23.896 [2024-12-09 05:25:10.718222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:34:23.896 /dev/nbd0 00:34:23.896 05:25:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:23.896 05:25:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:23.896 05:25:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:34:23.896 05:25:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:34:23.896 05:25:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:23.896 
05:25:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:23.896 05:25:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:34:23.896 05:25:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:34:23.896 05:25:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:23.896 05:25:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:23.896 05:25:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:23.896 1+0 records in 00:34:23.896 1+0 records out 00:34:23.896 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034928 s, 11.7 MB/s 00:34:23.896 05:25:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:23.896 05:25:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:34:23.896 05:25:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:23.896 05:25:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:23.896 05:25:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:34:23.896 05:25:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:23.896 05:25:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:23.896 05:25:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:34:23.896 05:25:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:34:23.896 05:25:10 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:34:24.153 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:34:24.153 { 00:34:24.153 "nbd_device": "/dev/nbd0", 00:34:24.153 "bdev_name": "raid" 00:34:24.153 } 00:34:24.153 ]' 00:34:24.153 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:24.153 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:34:24.153 { 00:34:24.153 "nbd_device": "/dev/nbd0", 00:34:24.153 "bdev_name": "raid" 00:34:24.153 } 00:34:24.153 ]' 00:34:24.153 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:34:24.153 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:34:24.153 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:24.153 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:34:24.153 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:34:24.153 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:34:24.153 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:34:24.153 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:34:24.153 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:34:24.153 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:34:24.153 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:34:24.153 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:34:24.153 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' 
-f 5 00:34:24.153 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:34:24.410 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:34:24.411 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:34:24.411 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:34:24.411 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:34:24.411 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:34:24.411 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:34:24.411 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:34:24.411 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:34:24.411 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:34:24.411 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:34:24.411 4096+0 records in 00:34:24.411 4096+0 records out 00:34:24.411 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0253092 s, 82.9 MB/s 00:34:24.411 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:34:24.669 4096+0 records in 00:34:24.669 4096+0 records out 00:34:24.669 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.346631 s, 6.1 MB/s 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 
-- # (( i = 0 )) 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:34:24.669 128+0 records in 00:34:24.669 128+0 records out 00:34:24.669 65536 bytes (66 kB, 64 KiB) copied, 0.00173116 s, 37.9 MB/s 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:34:24.669 2035+0 records in 00:34:24.669 2035+0 records out 00:34:24.669 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0119074 s, 87.5 MB/s 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:34:24.669 05:25:11 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:34:24.669 456+0 records in 00:34:24.669 456+0 records out 00:34:24.669 233472 bytes (233 kB, 228 KiB) copied, 0.00304977 s, 76.6 MB/s 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:24.669 05:25:11 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:24.669 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:34:25.235 [2024-12-09 05:25:11.933065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:25.235 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:25.235 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:25.235 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:25.235 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:25.235 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:25.235 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:25.235 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:34:25.235 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:34:25.235 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:34:25.235 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:34:25.235 05:25:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:34:25.494 05:25:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:34:25.494 05:25:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:34:25.494 05:25:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:34:25.494 05:25:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:34:25.494 05:25:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:34:25.494 05:25:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:25.494 05:25:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:34:25.494 05:25:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:34:25.494 05:25:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:34:25.494 05:25:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:34:25.494 05:25:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:34:25.494 05:25:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60280 00:34:25.494 05:25:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60280 ']' 00:34:25.494 05:25:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60280 00:34:25.494 05:25:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:34:25.494 05:25:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:25.494 05:25:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60280 00:34:25.494 killing process with pid 60280 00:34:25.494 05:25:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:25.494 05:25:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:25.494 05:25:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60280' 00:34:25.494 05:25:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60280 
00:34:25.494 [2024-12-09 05:25:12.355372] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:25.494 05:25:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60280 00:34:25.494 [2024-12-09 05:25:12.355495] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:25.494 [2024-12-09 05:25:12.355560] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:25.494 [2024-12-09 05:25:12.355590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:34:25.752 [2024-12-09 05:25:12.547561] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:27.127 05:25:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:34:27.127 00:34:27.127 real 0m4.483s 00:34:27.127 user 0m5.402s 00:34:27.127 sys 0m1.042s 00:34:27.127 05:25:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:27.127 ************************************ 00:34:27.127 END TEST raid_function_test_raid0 00:34:27.127 ************************************ 00:34:27.127 05:25:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:34:27.127 05:25:13 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:34:27.127 05:25:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:27.127 05:25:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:27.127 05:25:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:27.127 ************************************ 00:34:27.127 START TEST raid_function_test_concat 00:34:27.127 ************************************ 00:34:27.127 05:25:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:34:27.127 05:25:13 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:34:27.127 05:25:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:34:27.127 05:25:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:34:27.127 Process raid pid: 60409 00:34:27.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:27.127 05:25:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60409 00:34:27.127 05:25:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60409' 00:34:27.127 05:25:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:34:27.127 05:25:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60409 00:34:27.127 05:25:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60409 ']' 00:34:27.127 05:25:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:27.127 05:25:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:27.127 05:25:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:27.127 05:25:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:27.127 05:25:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:34:27.127 [2024-12-09 05:25:13.936296] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:34:27.127 [2024-12-09 05:25:13.936854] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:27.385 [2024-12-09 05:25:14.125926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:27.385 [2024-12-09 05:25:14.270111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:27.643 [2024-12-09 05:25:14.503270] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:27.643 [2024-12-09 05:25:14.503325] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:28.210 05:25:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:28.210 05:25:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:34:28.210 05:25:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:34:28.210 05:25:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.210 05:25:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:34:28.210 Base_1 00:34:28.210 05:25:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.210 05:25:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:34:28.210 05:25:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.210 05:25:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:34:28.210 Base_2 00:34:28.210 05:25:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.210 05:25:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:34:28.210 05:25:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.210 05:25:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:34:28.210 [2024-12-09 05:25:15.028418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:34:28.210 [2024-12-09 05:25:15.031021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:34:28.210 [2024-12-09 05:25:15.031135] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:34:28.210 [2024-12-09 05:25:15.031161] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:34:28.210 [2024-12-09 05:25:15.031489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:34:28.210 [2024-12-09 05:25:15.031711] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:34:28.210 [2024-12-09 05:25:15.031728] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:34:28.210 [2024-12-09 05:25:15.031932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:28.210 05:25:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.210 05:25:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:34:28.210 05:25:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.210 05:25:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:34:28.210 05:25:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:34:28.210 05:25:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.210 05:25:15 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:34:28.210 05:25:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:34:28.210 05:25:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:34:28.210 05:25:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:34:28.210 05:25:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:34:28.210 05:25:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:28.210 05:25:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:34:28.210 05:25:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:28.210 05:25:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:34:28.210 05:25:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:28.210 05:25:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:28.210 05:25:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:34:28.468 [2024-12-09 05:25:15.380584] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:34:28.468 /dev/nbd0 00:34:28.468 05:25:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:28.468 05:25:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:28.468 05:25:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:34:28.468 05:25:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:34:28.468 05:25:15 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:28.468 05:25:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:28.468 05:25:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:34:28.468 05:25:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:34:28.468 05:25:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:28.468 05:25:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:28.468 05:25:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:28.726 1+0 records in 00:34:28.726 1+0 records out 00:34:28.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444213 s, 9.2 MB/s 00:34:28.726 05:25:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:28.726 05:25:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:34:28.726 05:25:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:28.726 05:25:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:28.726 05:25:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:34:28.726 05:25:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:28.726 05:25:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:28.726 05:25:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:34:28.726 05:25:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 
00:34:28.726 05:25:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:34:28.987 05:25:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:34:28.987 { 00:34:28.987 "nbd_device": "/dev/nbd0", 00:34:28.987 "bdev_name": "raid" 00:34:28.987 } 00:34:28.987 ]' 00:34:28.987 05:25:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:34:28.987 { 00:34:28.987 "nbd_device": "/dev/nbd0", 00:34:28.987 "bdev_name": "raid" 00:34:28.987 } 00:34:28.987 ]' 00:34:28.987 05:25:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:28.987 05:25:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:34:28.987 05:25:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:34:28.987 05:25:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:28.987 05:25:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:34:28.987 05:25:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:34:28.987 05:25:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:34:28.987 05:25:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:34:28.987 05:25:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:34:28.987 05:25:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:34:28.987 05:25:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:34:28.987 05:25:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:34:28.987 05:25:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:34:28.987 05:25:15 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:34:28.987 05:25:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:34:28.987 05:25:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:34:28.987 05:25:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:34:28.987 05:25:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:34:28.987 05:25:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:34:28.987 05:25:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:34:28.987 05:25:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:34:28.987 05:25:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:34:28.987 05:25:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:34:28.987 05:25:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:34:28.987 05:25:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:34:28.987 4096+0 records in 00:34:28.987 4096+0 records out 00:34:28.987 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0291103 s, 72.0 MB/s 00:34:28.987 05:25:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:34:29.301 4096+0 records in 00:34:29.301 4096+0 records out 00:34:29.301 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.352091 s, 6.0 MB/s 00:34:29.301 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:34:29.301 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest 
/dev/nbd0 00:34:29.301 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:34:29.301 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:34:29.301 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:34:29.301 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:34:29.301 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:34:29.301 128+0 records in 00:34:29.301 128+0 records out 00:34:29.301 65536 bytes (66 kB, 64 KiB) copied, 0.0011598 s, 56.5 MB/s 00:34:29.301 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:34:29.301 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:34:29.301 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:34:29.301 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:34:29.301 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:34:29.301 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:34:29.301 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:34:29.301 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:34:29.559 2035+0 records in 00:34:29.559 2035+0 records out 00:34:29.559 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0126057 s, 82.7 MB/s 00:34:29.559 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:34:29.559 05:25:16 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:34:29.559 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:34:29.559 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:34:29.559 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:34:29.559 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:34:29.559 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:34:29.559 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:34:29.559 456+0 records in 00:34:29.559 456+0 records out 00:34:29.559 233472 bytes (233 kB, 228 KiB) copied, 0.00248937 s, 93.8 MB/s 00:34:29.559 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:34:29.559 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:34:29.559 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:34:29.559 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:34:29.559 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:34:29.559 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:34:29.559 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:34:29.559 05:25:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:34:29.559 05:25:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:29.559 05:25:16 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:29.559 05:25:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:34:29.559 05:25:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:29.559 05:25:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:34:29.818 [2024-12-09 05:25:16.655155] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:29.818 05:25:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:29.818 05:25:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:29.818 05:25:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:29.818 05:25:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:29.818 05:25:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:29.818 05:25:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:29.818 05:25:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:34:29.818 05:25:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:34:29.818 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:34:29.818 05:25:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:34:29.818 05:25:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:34:30.076 05:25:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:34:30.076 05:25:16 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:34:30.076 05:25:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:30.076 05:25:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:34:30.076 05:25:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:34:30.076 05:25:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:30.076 05:25:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:34:30.076 05:25:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:34:30.076 05:25:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:34:30.076 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:34:30.076 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:34:30.076 05:25:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60409 00:34:30.076 05:25:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60409 ']' 00:34:30.077 05:25:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60409 00:34:30.077 05:25:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:34:30.077 05:25:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:30.077 05:25:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60409 00:34:30.077 05:25:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:30.077 05:25:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:30.077 05:25:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60409' 00:34:30.077 
killing process with pid 60409 00:34:30.077 05:25:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60409 00:34:30.077 [2024-12-09 05:25:17.014110] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:30.077 05:25:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60409 00:34:30.077 [2024-12-09 05:25:17.014248] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:30.077 [2024-12-09 05:25:17.014331] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:30.077 [2024-12-09 05:25:17.014352] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:34:30.334 [2024-12-09 05:25:17.200648] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:31.711 05:25:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:34:31.711 00:34:31.711 real 0m4.583s 00:34:31.711 user 0m5.553s 00:34:31.711 sys 0m1.096s 00:34:31.711 05:25:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:31.711 05:25:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:34:31.711 ************************************ 00:34:31.711 END TEST raid_function_test_concat 00:34:31.711 ************************************ 00:34:31.711 05:25:18 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:34:31.711 05:25:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:31.711 05:25:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:31.711 05:25:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:31.711 ************************************ 00:34:31.711 START TEST raid0_resize_test 00:34:31.711 ************************************ 00:34:31.711 05:25:18 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@1129 -- # raid_resize_test 0 00:34:31.711 05:25:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:34:31.711 05:25:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:34:31.711 05:25:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:34:31.711 05:25:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:34:31.711 05:25:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:34:31.711 05:25:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:34:31.711 05:25:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:34:31.711 05:25:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:34:31.711 05:25:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60543 00:34:31.711 Process raid pid: 60543 00:34:31.711 05:25:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60543' 00:34:31.711 05:25:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60543 00:34:31.711 05:25:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60543 ']' 00:34:31.711 05:25:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:31.711 05:25:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:34:31.711 05:25:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:31.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:31.711 05:25:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:31.711 05:25:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:31.711 05:25:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:34:31.711 [2024-12-09 05:25:18.579099] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:34:31.711 [2024-12-09 05:25:18.579300] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:31.969 [2024-12-09 05:25:18.779686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:32.226 [2024-12-09 05:25:18.947958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:32.226 [2024-12-09 05:25:19.182694] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:32.226 [2024-12-09 05:25:19.182751] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:32.790 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:32.790 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:34:32.790 05:25:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:34:32.790 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.790 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:34:32.790 Base_1 00:34:32.790 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.790 05:25:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:34:32.790 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.790 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:34:32.790 Base_2 00:34:32.790 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.790 05:25:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:34:32.790 05:25:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:34:32.790 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.790 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:34:32.790 [2024-12-09 05:25:19.597917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:34:32.790 [2024-12-09 05:25:19.600457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:34:32.790 [2024-12-09 05:25:19.600557] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:34:32.790 [2024-12-09 05:25:19.600577] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:34:32.790 [2024-12-09 05:25:19.600916] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:34:32.790 [2024-12-09 05:25:19.601096] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:34:32.790 [2024-12-09 05:25:19.601120] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:34:32.790 [2024-12-09 05:25:19.601288] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:32.790 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.790 05:25:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:34:32.790 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.790 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 
00:34:32.790 [2024-12-09 05:25:19.605904] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:34:32.790 [2024-12-09 05:25:19.605949] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:34:32.790 true 00:34:32.790 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.790 05:25:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:34:32.790 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.790 05:25:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:34:32.790 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:34:32.790 [2024-12-09 05:25:19.618090] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:32.790 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.790 05:25:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:34:32.790 05:25:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:34:32.790 05:25:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:34:32.791 05:25:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:34:32.791 05:25:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:34:32.791 05:25:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:34:32.791 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.791 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:34:32.791 [2024-12-09 05:25:19.669876] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:34:32.791 [2024-12-09 05:25:19.669905] 
bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:34:32.791 [2024-12-09 05:25:19.669939] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:34:32.791 true 00:34:32.791 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.791 05:25:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:34:32.791 05:25:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:34:32.791 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.791 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:34:32.791 [2024-12-09 05:25:19.682108] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:32.791 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.791 05:25:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:34:32.791 05:25:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:34:32.791 05:25:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:34:32.791 05:25:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:34:32.791 05:25:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:34:32.791 05:25:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60543 00:34:32.791 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60543 ']' 00:34:32.791 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60543 00:34:32.791 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:34:32.791 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- 
# '[' Linux = Linux ']' 00:34:32.791 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60543 00:34:33.050 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:33.050 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:33.050 killing process with pid 60543 00:34:33.050 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60543' 00:34:33.050 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60543 00:34:33.050 [2024-12-09 05:25:19.766543] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:33.050 [2024-12-09 05:25:19.766617] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:33.050 05:25:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60543 00:34:33.050 [2024-12-09 05:25:19.766672] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:33.050 [2024-12-09 05:25:19.766686] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:34:33.050 [2024-12-09 05:25:19.782724] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:34.422 05:25:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:34:34.422 00:34:34.422 real 0m2.507s 00:34:34.422 user 0m2.713s 00:34:34.422 sys 0m0.456s 00:34:34.422 05:25:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:34.422 05:25:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:34:34.422 ************************************ 00:34:34.422 END TEST raid0_resize_test 00:34:34.422 ************************************ 00:34:34.422 05:25:21 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:34:34.422 
05:25:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:34.422 05:25:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:34.422 05:25:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:34.422 ************************************ 00:34:34.422 START TEST raid1_resize_test 00:34:34.422 ************************************ 00:34:34.422 05:25:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:34:34.422 05:25:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:34:34.422 05:25:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:34:34.422 05:25:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:34:34.422 05:25:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:34:34.422 05:25:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:34:34.422 05:25:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:34:34.422 05:25:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:34:34.422 05:25:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:34:34.422 05:25:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60604 00:34:34.422 Process raid pid: 60604 00:34:34.422 05:25:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60604' 00:34:34.422 05:25:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60604 00:34:34.422 05:25:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:34:34.422 05:25:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60604 ']' 00:34:34.422 05:25:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:34:34.422 05:25:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:34.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:34.422 05:25:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:34.422 05:25:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:34.422 05:25:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:34:34.422 [2024-12-09 05:25:21.133298] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:34:34.422 [2024-12-09 05:25:21.133483] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:34.422 [2024-12-09 05:25:21.326497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:34.680 [2024-12-09 05:25:21.472992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:34.937 [2024-12-09 05:25:21.699621] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:34.937 [2024-12-09 05:25:21.699668] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:34:35.518 
Base_1 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:34:35.518 Base_2 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:34:35.518 [2024-12-09 05:25:22.216287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:34:35.518 [2024-12-09 05:25:22.218687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:34:35.518 [2024-12-09 05:25:22.218782] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:34:35.518 [2024-12-09 05:25:22.218802] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:34:35.518 [2024-12-09 05:25:22.219114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:34:35.518 [2024-12-09 05:25:22.219284] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:34:35.518 [2024-12-09 05:25:22.219300] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:34:35.518 [2024-12-09 05:25:22.219465] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:34:35.518 [2024-12-09 05:25:22.224269] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:34:35.518 [2024-12-09 05:25:22.224310] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:34:35.518 true 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:34:35.518 [2024-12-09 05:25:22.236472] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:34:35.518 [2024-12-09 05:25:22.288271] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:34:35.518 [2024-12-09 05:25:22.288303] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:34:35.518 [2024-12-09 05:25:22.288341] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:34:35.518 true 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:34:35.518 [2024-12-09 05:25:22.300468] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@387 -- # killprocess 60604 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60604 ']' 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60604 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60604 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:35.518 killing process with pid 60604 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60604' 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60604 00:34:35.518 [2024-12-09 05:25:22.385905] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:35.518 [2024-12-09 05:25:22.386023] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:35.518 05:25:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60604 00:34:35.518 [2024-12-09 05:25:22.386647] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:35.518 [2024-12-09 05:25:22.386681] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:34:35.518 [2024-12-09 05:25:22.402242] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:36.892 05:25:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:34:36.892 00:34:36.892 real 0m2.579s 00:34:36.892 user 0m2.879s 00:34:36.892 sys 0m0.424s 00:34:36.892 05:25:23 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:36.892 ************************************ 00:34:36.892 END TEST raid1_resize_test 00:34:36.892 ************************************ 00:34:36.892 05:25:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:34:36.892 05:25:23 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:34:36.892 05:25:23 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:34:36.892 05:25:23 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:34:36.892 05:25:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:34:36.892 05:25:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:36.892 05:25:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:36.892 ************************************ 00:34:36.892 START TEST raid_state_function_test 00:34:36.892 ************************************ 00:34:36.892 05:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:34:36.892 05:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:34:36.892 05:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:34:36.892 05:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:34:36.892 05:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:34:36.892 05:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:34:36.892 05:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:36.892 05:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:34:36.892 05:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:34:36.892 05:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:36.892 05:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:34:36.892 05:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:36.892 05:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:36.892 05:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:34:36.892 05:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:34:36.892 05:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:34:36.892 05:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:34:36.892 05:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:34:36.892 05:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:34:36.892 05:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:34:36.892 05:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:34:36.892 05:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:34:36.892 05:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:34:36.893 05:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:34:36.893 05:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60667 00:34:36.893 05:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:34:36.893 05:25:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60667' 00:34:36.893 Process raid pid: 60667 00:34:36.893 05:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60667 00:34:36.893 05:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60667 ']' 00:34:36.893 05:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:36.893 05:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:36.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:36.893 05:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:36.893 05:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:36.893 05:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:36.893 [2024-12-09 05:25:23.785102] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:34:36.893 [2024-12-09 05:25:23.785348] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:37.150 [2024-12-09 05:25:23.980400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:37.425 [2024-12-09 05:25:24.122912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:37.425 [2024-12-09 05:25:24.348511] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:37.425 [2024-12-09 05:25:24.348569] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:38.030 05:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:38.030 05:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:34:38.030 05:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:34:38.030 05:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.030 05:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.030 [2024-12-09 05:25:24.714656] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:38.030 [2024-12-09 05:25:24.714725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:38.030 [2024-12-09 05:25:24.714742] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:38.031 [2024-12-09 05:25:24.714759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:38.031 05:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.031 05:25:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:34:38.031 05:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:38.031 05:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:38.031 05:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:38.031 05:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:38.031 05:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:38.031 05:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:38.031 05:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:38.031 05:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:38.031 05:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:38.031 05:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:38.031 05:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:38.031 05:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.031 05:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.031 05:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.031 05:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:38.031 "name": "Existed_Raid", 00:34:38.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:38.031 "strip_size_kb": 64, 00:34:38.031 "state": "configuring", 00:34:38.031 
"raid_level": "raid0", 00:34:38.031 "superblock": false, 00:34:38.031 "num_base_bdevs": 2, 00:34:38.031 "num_base_bdevs_discovered": 0, 00:34:38.031 "num_base_bdevs_operational": 2, 00:34:38.031 "base_bdevs_list": [ 00:34:38.031 { 00:34:38.031 "name": "BaseBdev1", 00:34:38.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:38.031 "is_configured": false, 00:34:38.031 "data_offset": 0, 00:34:38.031 "data_size": 0 00:34:38.031 }, 00:34:38.031 { 00:34:38.031 "name": "BaseBdev2", 00:34:38.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:38.031 "is_configured": false, 00:34:38.031 "data_offset": 0, 00:34:38.031 "data_size": 0 00:34:38.031 } 00:34:38.031 ] 00:34:38.031 }' 00:34:38.031 05:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:38.031 05:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.290 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:34:38.290 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.290 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.290 [2024-12-09 05:25:25.226760] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:38.290 [2024-12-09 05:25:25.226848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:34:38.290 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.290 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:34:38.290 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.290 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:34:38.290 [2024-12-09 05:25:25.234745] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:38.290 [2024-12-09 05:25:25.234819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:38.290 [2024-12-09 05:25:25.234835] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:38.290 [2024-12-09 05:25:25.234854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:38.290 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.290 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:34:38.290 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.290 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.548 [2024-12-09 05:25:25.282936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:38.548 BaseBdev1 00:34:38.548 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.548 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:34:38.548 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:34:38.548 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:38.548 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:38.548 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:38.548 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:38.548 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:34:38.548 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.548 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.548 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.548 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:38.548 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.548 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.548 [ 00:34:38.548 { 00:34:38.548 "name": "BaseBdev1", 00:34:38.548 "aliases": [ 00:34:38.548 "e8e40d63-eabb-4bc3-9916-f0b14bb8c5e6" 00:34:38.548 ], 00:34:38.548 "product_name": "Malloc disk", 00:34:38.548 "block_size": 512, 00:34:38.548 "num_blocks": 65536, 00:34:38.548 "uuid": "e8e40d63-eabb-4bc3-9916-f0b14bb8c5e6", 00:34:38.548 "assigned_rate_limits": { 00:34:38.548 "rw_ios_per_sec": 0, 00:34:38.548 "rw_mbytes_per_sec": 0, 00:34:38.548 "r_mbytes_per_sec": 0, 00:34:38.548 "w_mbytes_per_sec": 0 00:34:38.548 }, 00:34:38.548 "claimed": true, 00:34:38.548 "claim_type": "exclusive_write", 00:34:38.548 "zoned": false, 00:34:38.548 "supported_io_types": { 00:34:38.548 "read": true, 00:34:38.548 "write": true, 00:34:38.548 "unmap": true, 00:34:38.548 "flush": true, 00:34:38.548 "reset": true, 00:34:38.548 "nvme_admin": false, 00:34:38.548 "nvme_io": false, 00:34:38.548 "nvme_io_md": false, 00:34:38.548 "write_zeroes": true, 00:34:38.548 "zcopy": true, 00:34:38.548 "get_zone_info": false, 00:34:38.548 "zone_management": false, 00:34:38.548 "zone_append": false, 00:34:38.548 "compare": false, 00:34:38.548 "compare_and_write": false, 00:34:38.548 "abort": true, 00:34:38.548 "seek_hole": false, 00:34:38.548 "seek_data": false, 00:34:38.548 "copy": true, 00:34:38.548 "nvme_iov_md": 
false 00:34:38.548 }, 00:34:38.548 "memory_domains": [ 00:34:38.548 { 00:34:38.548 "dma_device_id": "system", 00:34:38.548 "dma_device_type": 1 00:34:38.548 }, 00:34:38.548 { 00:34:38.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:38.548 "dma_device_type": 2 00:34:38.548 } 00:34:38.548 ], 00:34:38.548 "driver_specific": {} 00:34:38.548 } 00:34:38.548 ] 00:34:38.548 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.548 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:38.548 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:34:38.548 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:38.548 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:38.548 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:38.548 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:38.548 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:38.548 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:38.548 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:38.548 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:38.548 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:38.548 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:38.548 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:38.548 
05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.548 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.548 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.548 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:38.548 "name": "Existed_Raid", 00:34:38.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:38.548 "strip_size_kb": 64, 00:34:38.548 "state": "configuring", 00:34:38.549 "raid_level": "raid0", 00:34:38.549 "superblock": false, 00:34:38.549 "num_base_bdevs": 2, 00:34:38.549 "num_base_bdevs_discovered": 1, 00:34:38.549 "num_base_bdevs_operational": 2, 00:34:38.549 "base_bdevs_list": [ 00:34:38.549 { 00:34:38.549 "name": "BaseBdev1", 00:34:38.549 "uuid": "e8e40d63-eabb-4bc3-9916-f0b14bb8c5e6", 00:34:38.549 "is_configured": true, 00:34:38.549 "data_offset": 0, 00:34:38.549 "data_size": 65536 00:34:38.549 }, 00:34:38.549 { 00:34:38.549 "name": "BaseBdev2", 00:34:38.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:38.549 "is_configured": false, 00:34:38.549 "data_offset": 0, 00:34:38.549 "data_size": 0 00:34:38.549 } 00:34:38.549 ] 00:34:38.549 }' 00:34:38.549 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:38.549 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:39.116 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:34:39.116 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.116 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:39.116 [2024-12-09 05:25:25.879132] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:39.116 [2024-12-09 05:25:25.879240] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:34:39.116 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.116 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:34:39.116 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.116 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:39.116 [2024-12-09 05:25:25.887174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:39.116 [2024-12-09 05:25:25.889684] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:39.116 [2024-12-09 05:25:25.889750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:39.116 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.116 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:34:39.116 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:39.116 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:34:39.116 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:39.116 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:39.116 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:39.116 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:39.116 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:34:39.116 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:39.116 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:39.117 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:39.117 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:39.117 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:39.117 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:39.117 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.117 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:39.117 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.117 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:39.117 "name": "Existed_Raid", 00:34:39.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:39.117 "strip_size_kb": 64, 00:34:39.117 "state": "configuring", 00:34:39.117 "raid_level": "raid0", 00:34:39.117 "superblock": false, 00:34:39.117 "num_base_bdevs": 2, 00:34:39.117 "num_base_bdevs_discovered": 1, 00:34:39.117 "num_base_bdevs_operational": 2, 00:34:39.117 "base_bdevs_list": [ 00:34:39.117 { 00:34:39.117 "name": "BaseBdev1", 00:34:39.117 "uuid": "e8e40d63-eabb-4bc3-9916-f0b14bb8c5e6", 00:34:39.117 "is_configured": true, 00:34:39.117 "data_offset": 0, 00:34:39.117 "data_size": 65536 00:34:39.117 }, 00:34:39.117 { 00:34:39.117 "name": "BaseBdev2", 00:34:39.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:39.117 "is_configured": false, 00:34:39.117 "data_offset": 0, 00:34:39.117 "data_size": 0 00:34:39.117 } 00:34:39.117 
] 00:34:39.117 }' 00:34:39.117 05:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:39.117 05:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:39.684 05:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:34:39.684 05:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.684 05:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:39.684 [2024-12-09 05:25:26.480908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:39.684 [2024-12-09 05:25:26.480978] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:34:39.684 [2024-12-09 05:25:26.480993] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:34:39.684 [2024-12-09 05:25:26.481371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:34:39.684 [2024-12-09 05:25:26.481619] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:34:39.684 [2024-12-09 05:25:26.481661] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:34:39.684 [2024-12-09 05:25:26.482045] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:39.684 BaseBdev2 00:34:39.684 05:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.684 05:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:34:39.684 05:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:34:39.684 05:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:39.684 05:25:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:39.684 05:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:39.684 05:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:39.684 05:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:39.684 05:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.684 05:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:39.684 05:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.684 05:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:39.684 05:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.684 05:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:39.684 [ 00:34:39.684 { 00:34:39.684 "name": "BaseBdev2", 00:34:39.684 "aliases": [ 00:34:39.684 "149bd9f5-dcaf-46b3-8bf4-1699b51b2b46" 00:34:39.684 ], 00:34:39.684 "product_name": "Malloc disk", 00:34:39.684 "block_size": 512, 00:34:39.684 "num_blocks": 65536, 00:34:39.684 "uuid": "149bd9f5-dcaf-46b3-8bf4-1699b51b2b46", 00:34:39.684 "assigned_rate_limits": { 00:34:39.684 "rw_ios_per_sec": 0, 00:34:39.684 "rw_mbytes_per_sec": 0, 00:34:39.684 "r_mbytes_per_sec": 0, 00:34:39.684 "w_mbytes_per_sec": 0 00:34:39.684 }, 00:34:39.684 "claimed": true, 00:34:39.684 "claim_type": "exclusive_write", 00:34:39.684 "zoned": false, 00:34:39.684 "supported_io_types": { 00:34:39.684 "read": true, 00:34:39.685 "write": true, 00:34:39.685 "unmap": true, 00:34:39.685 "flush": true, 00:34:39.685 "reset": true, 00:34:39.685 "nvme_admin": false, 00:34:39.685 "nvme_io": false, 00:34:39.685 "nvme_io_md": 
false, 00:34:39.685 "write_zeroes": true, 00:34:39.685 "zcopy": true, 00:34:39.685 "get_zone_info": false, 00:34:39.685 "zone_management": false, 00:34:39.685 "zone_append": false, 00:34:39.685 "compare": false, 00:34:39.685 "compare_and_write": false, 00:34:39.685 "abort": true, 00:34:39.685 "seek_hole": false, 00:34:39.685 "seek_data": false, 00:34:39.685 "copy": true, 00:34:39.685 "nvme_iov_md": false 00:34:39.685 }, 00:34:39.685 "memory_domains": [ 00:34:39.685 { 00:34:39.685 "dma_device_id": "system", 00:34:39.685 "dma_device_type": 1 00:34:39.685 }, 00:34:39.685 { 00:34:39.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:39.685 "dma_device_type": 2 00:34:39.685 } 00:34:39.685 ], 00:34:39.685 "driver_specific": {} 00:34:39.685 } 00:34:39.685 ] 00:34:39.685 05:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.685 05:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:39.685 05:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:34:39.685 05:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:39.685 05:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:34:39.685 05:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:39.685 05:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:39.685 05:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:39.685 05:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:39.685 05:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:39.685 05:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:34:39.685 05:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:39.685 05:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:39.685 05:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:39.685 05:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:39.685 05:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.685 05:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:39.685 05:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:39.685 05:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.685 05:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:39.685 "name": "Existed_Raid", 00:34:39.685 "uuid": "bfa15db4-14a2-4850-9adf-faef7a70505a", 00:34:39.685 "strip_size_kb": 64, 00:34:39.685 "state": "online", 00:34:39.685 "raid_level": "raid0", 00:34:39.685 "superblock": false, 00:34:39.685 "num_base_bdevs": 2, 00:34:39.685 "num_base_bdevs_discovered": 2, 00:34:39.685 "num_base_bdevs_operational": 2, 00:34:39.685 "base_bdevs_list": [ 00:34:39.685 { 00:34:39.685 "name": "BaseBdev1", 00:34:39.685 "uuid": "e8e40d63-eabb-4bc3-9916-f0b14bb8c5e6", 00:34:39.685 "is_configured": true, 00:34:39.685 "data_offset": 0, 00:34:39.685 "data_size": 65536 00:34:39.685 }, 00:34:39.685 { 00:34:39.685 "name": "BaseBdev2", 00:34:39.685 "uuid": "149bd9f5-dcaf-46b3-8bf4-1699b51b2b46", 00:34:39.685 "is_configured": true, 00:34:39.685 "data_offset": 0, 00:34:39.685 "data_size": 65536 00:34:39.685 } 00:34:39.685 ] 00:34:39.685 }' 00:34:39.685 05:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:34:39.685 05:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:40.252 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:34:40.252 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:34:40.252 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:40.252 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:40.252 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:34:40.252 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:40.252 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:34:40.252 05:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.252 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:40.252 05:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:40.252 [2024-12-09 05:25:27.041491] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:40.252 05:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.252 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:40.252 "name": "Existed_Raid", 00:34:40.252 "aliases": [ 00:34:40.252 "bfa15db4-14a2-4850-9adf-faef7a70505a" 00:34:40.252 ], 00:34:40.252 "product_name": "Raid Volume", 00:34:40.252 "block_size": 512, 00:34:40.252 "num_blocks": 131072, 00:34:40.252 "uuid": "bfa15db4-14a2-4850-9adf-faef7a70505a", 00:34:40.252 "assigned_rate_limits": { 00:34:40.252 "rw_ios_per_sec": 0, 00:34:40.252 "rw_mbytes_per_sec": 0, 00:34:40.252 "r_mbytes_per_sec": 
0, 00:34:40.252 "w_mbytes_per_sec": 0 00:34:40.252 }, 00:34:40.252 "claimed": false, 00:34:40.252 "zoned": false, 00:34:40.252 "supported_io_types": { 00:34:40.252 "read": true, 00:34:40.252 "write": true, 00:34:40.252 "unmap": true, 00:34:40.252 "flush": true, 00:34:40.252 "reset": true, 00:34:40.252 "nvme_admin": false, 00:34:40.252 "nvme_io": false, 00:34:40.252 "nvme_io_md": false, 00:34:40.252 "write_zeroes": true, 00:34:40.252 "zcopy": false, 00:34:40.252 "get_zone_info": false, 00:34:40.252 "zone_management": false, 00:34:40.252 "zone_append": false, 00:34:40.252 "compare": false, 00:34:40.252 "compare_and_write": false, 00:34:40.252 "abort": false, 00:34:40.252 "seek_hole": false, 00:34:40.252 "seek_data": false, 00:34:40.252 "copy": false, 00:34:40.252 "nvme_iov_md": false 00:34:40.252 }, 00:34:40.252 "memory_domains": [ 00:34:40.252 { 00:34:40.252 "dma_device_id": "system", 00:34:40.252 "dma_device_type": 1 00:34:40.252 }, 00:34:40.252 { 00:34:40.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:40.252 "dma_device_type": 2 00:34:40.252 }, 00:34:40.252 { 00:34:40.252 "dma_device_id": "system", 00:34:40.252 "dma_device_type": 1 00:34:40.252 }, 00:34:40.252 { 00:34:40.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:40.252 "dma_device_type": 2 00:34:40.252 } 00:34:40.252 ], 00:34:40.252 "driver_specific": { 00:34:40.252 "raid": { 00:34:40.252 "uuid": "bfa15db4-14a2-4850-9adf-faef7a70505a", 00:34:40.252 "strip_size_kb": 64, 00:34:40.252 "state": "online", 00:34:40.252 "raid_level": "raid0", 00:34:40.252 "superblock": false, 00:34:40.252 "num_base_bdevs": 2, 00:34:40.252 "num_base_bdevs_discovered": 2, 00:34:40.252 "num_base_bdevs_operational": 2, 00:34:40.252 "base_bdevs_list": [ 00:34:40.252 { 00:34:40.252 "name": "BaseBdev1", 00:34:40.252 "uuid": "e8e40d63-eabb-4bc3-9916-f0b14bb8c5e6", 00:34:40.252 "is_configured": true, 00:34:40.252 "data_offset": 0, 00:34:40.252 "data_size": 65536 00:34:40.252 }, 00:34:40.252 { 00:34:40.252 "name": "BaseBdev2", 
00:34:40.252 "uuid": "149bd9f5-dcaf-46b3-8bf4-1699b51b2b46", 00:34:40.252 "is_configured": true, 00:34:40.252 "data_offset": 0, 00:34:40.252 "data_size": 65536 00:34:40.252 } 00:34:40.252 ] 00:34:40.252 } 00:34:40.252 } 00:34:40.252 }' 00:34:40.252 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:40.252 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:34:40.252 BaseBdev2' 00:34:40.252 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:40.252 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:34:40.252 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:40.252 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:34:40.252 05:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.252 05:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:40.252 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:40.252 05:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:40.511 [2024-12-09 05:25:27.313275] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:40.511 [2024-12-09 05:25:27.313337] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:40.511 [2024-12-09 05:25:27.313403] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:40.511 "name": "Existed_Raid", 00:34:40.511 "uuid": "bfa15db4-14a2-4850-9adf-faef7a70505a", 00:34:40.511 "strip_size_kb": 64, 00:34:40.511 
"state": "offline", 00:34:40.511 "raid_level": "raid0", 00:34:40.511 "superblock": false, 00:34:40.511 "num_base_bdevs": 2, 00:34:40.511 "num_base_bdevs_discovered": 1, 00:34:40.511 "num_base_bdevs_operational": 1, 00:34:40.511 "base_bdevs_list": [ 00:34:40.511 { 00:34:40.511 "name": null, 00:34:40.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:40.511 "is_configured": false, 00:34:40.511 "data_offset": 0, 00:34:40.511 "data_size": 65536 00:34:40.511 }, 00:34:40.511 { 00:34:40.511 "name": "BaseBdev2", 00:34:40.511 "uuid": "149bd9f5-dcaf-46b3-8bf4-1699b51b2b46", 00:34:40.511 "is_configured": true, 00:34:40.511 "data_offset": 0, 00:34:40.511 "data_size": 65536 00:34:40.511 } 00:34:40.511 ] 00:34:40.511 }' 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:40.511 05:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:41.078 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:34:41.078 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:41.078 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:41.078 05:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.078 05:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:34:41.078 05:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:41.078 05:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.078 05:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:34:41.078 05:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:41.078 05:25:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:34:41.078 05:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.078 05:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:41.078 [2024-12-09 05:25:28.011210] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:41.078 [2024-12-09 05:25:28.011328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:34:41.337 05:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.337 05:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:34:41.337 05:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:41.337 05:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:41.337 05:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:34:41.337 05:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.337 05:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:41.337 05:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.337 05:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:34:41.337 05:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:34:41.337 05:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:34:41.337 05:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60667 00:34:41.337 05:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60667 ']' 00:34:41.337 05:25:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60667 00:34:41.337 05:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:34:41.337 05:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:41.337 05:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60667 00:34:41.337 05:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:41.337 05:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:41.337 killing process with pid 60667 00:34:41.337 05:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60667' 00:34:41.337 05:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60667 00:34:41.337 [2024-12-09 05:25:28.188281] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:41.337 05:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60667 00:34:41.337 [2024-12-09 05:25:28.203985] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:34:42.715 00:34:42.715 real 0m5.739s 00:34:42.715 user 0m8.606s 00:34:42.715 sys 0m0.841s 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:42.715 ************************************ 00:34:42.715 END TEST raid_state_function_test 00:34:42.715 ************************************ 00:34:42.715 05:25:29 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:34:42.715 05:25:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:34:42.715 05:25:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:42.715 05:25:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:42.715 ************************************ 00:34:42.715 START TEST raid_state_function_test_sb 00:34:42.715 ************************************ 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60925 00:34:42.715 Process raid pid: 60925 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60925' 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60925 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60925 ']' 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:42.715 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:42.715 05:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:42.715 [2024-12-09 05:25:29.580111] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:34:42.715 [2024-12-09 05:25:29.580318] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:42.973 [2024-12-09 05:25:29.770057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:42.973 [2024-12-09 05:25:29.917364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:43.231 [2024-12-09 05:25:30.141637] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:43.231 [2024-12-09 05:25:30.141702] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:43.808 05:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:43.808 05:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:34:43.808 05:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:34:43.808 05:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.808 05:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:43.808 [2024-12-09 05:25:30.599830] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:34:43.808 [2024-12-09 05:25:30.599909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:43.808 [2024-12-09 05:25:30.599926] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:43.808 [2024-12-09 05:25:30.599941] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:43.808 05:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.808 05:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:34:43.808 05:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:43.808 05:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:43.808 05:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:43.808 05:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:43.808 05:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:43.808 05:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:43.808 05:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:43.808 05:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:43.808 05:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:43.808 05:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:43.808 05:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.808 
05:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:43.808 05:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:43.808 05:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.808 05:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:43.808 "name": "Existed_Raid", 00:34:43.808 "uuid": "ec52fc1b-209f-41f9-93be-eb6aa47580e5", 00:34:43.808 "strip_size_kb": 64, 00:34:43.808 "state": "configuring", 00:34:43.808 "raid_level": "raid0", 00:34:43.808 "superblock": true, 00:34:43.808 "num_base_bdevs": 2, 00:34:43.808 "num_base_bdevs_discovered": 0, 00:34:43.808 "num_base_bdevs_operational": 2, 00:34:43.808 "base_bdevs_list": [ 00:34:43.808 { 00:34:43.808 "name": "BaseBdev1", 00:34:43.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:43.808 "is_configured": false, 00:34:43.808 "data_offset": 0, 00:34:43.808 "data_size": 0 00:34:43.808 }, 00:34:43.808 { 00:34:43.808 "name": "BaseBdev2", 00:34:43.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:43.808 "is_configured": false, 00:34:43.808 "data_offset": 0, 00:34:43.808 "data_size": 0 00:34:43.808 } 00:34:43.808 ] 00:34:43.808 }' 00:34:43.808 05:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:43.808 05:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:44.404 05:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:34:44.404 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.404 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:44.404 [2024-12-09 05:25:31.143974] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:34:44.404 [2024-12-09 05:25:31.144021] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:34:44.404 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.404 05:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:34:44.404 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.404 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:44.404 [2024-12-09 05:25:31.151924] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:44.404 [2024-12-09 05:25:31.151976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:44.404 [2024-12-09 05:25:31.151991] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:44.404 [2024-12-09 05:25:31.152009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:44.404 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.404 05:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:34:44.404 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.404 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:44.404 [2024-12-09 05:25:31.200915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:44.404 BaseBdev1 00:34:44.404 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.404 05:25:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:34:44.404 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:34:44.404 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:44.404 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:34:44.404 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:44.404 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:44.404 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:44.404 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.404 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:44.404 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.404 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:44.404 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.404 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:44.404 [ 00:34:44.404 { 00:34:44.404 "name": "BaseBdev1", 00:34:44.404 "aliases": [ 00:34:44.404 "2bf18054-a644-459f-a394-7cb16948aa6c" 00:34:44.404 ], 00:34:44.404 "product_name": "Malloc disk", 00:34:44.404 "block_size": 512, 00:34:44.404 "num_blocks": 65536, 00:34:44.404 "uuid": "2bf18054-a644-459f-a394-7cb16948aa6c", 00:34:44.404 "assigned_rate_limits": { 00:34:44.404 "rw_ios_per_sec": 0, 00:34:44.404 "rw_mbytes_per_sec": 0, 00:34:44.404 "r_mbytes_per_sec": 0, 00:34:44.404 "w_mbytes_per_sec": 0 00:34:44.404 }, 00:34:44.404 "claimed": true, 
00:34:44.404 "claim_type": "exclusive_write", 00:34:44.404 "zoned": false, 00:34:44.404 "supported_io_types": { 00:34:44.404 "read": true, 00:34:44.404 "write": true, 00:34:44.404 "unmap": true, 00:34:44.404 "flush": true, 00:34:44.404 "reset": true, 00:34:44.404 "nvme_admin": false, 00:34:44.404 "nvme_io": false, 00:34:44.404 "nvme_io_md": false, 00:34:44.404 "write_zeroes": true, 00:34:44.404 "zcopy": true, 00:34:44.405 "get_zone_info": false, 00:34:44.405 "zone_management": false, 00:34:44.405 "zone_append": false, 00:34:44.405 "compare": false, 00:34:44.405 "compare_and_write": false, 00:34:44.405 "abort": true, 00:34:44.405 "seek_hole": false, 00:34:44.405 "seek_data": false, 00:34:44.405 "copy": true, 00:34:44.405 "nvme_iov_md": false 00:34:44.405 }, 00:34:44.405 "memory_domains": [ 00:34:44.405 { 00:34:44.405 "dma_device_id": "system", 00:34:44.405 "dma_device_type": 1 00:34:44.405 }, 00:34:44.405 { 00:34:44.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:44.405 "dma_device_type": 2 00:34:44.405 } 00:34:44.405 ], 00:34:44.405 "driver_specific": {} 00:34:44.405 } 00:34:44.405 ] 00:34:44.405 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.405 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:34:44.405 05:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:34:44.405 05:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:44.405 05:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:44.405 05:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:44.405 05:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:44.405 05:25:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:44.405 05:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:44.405 05:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:44.405 05:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:44.405 05:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:44.405 05:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:44.405 05:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:44.405 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.405 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:44.405 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.405 05:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:44.405 "name": "Existed_Raid", 00:34:44.405 "uuid": "cd7b25fd-10a5-436d-8122-5166f6cbf7b9", 00:34:44.405 "strip_size_kb": 64, 00:34:44.405 "state": "configuring", 00:34:44.405 "raid_level": "raid0", 00:34:44.405 "superblock": true, 00:34:44.405 "num_base_bdevs": 2, 00:34:44.405 "num_base_bdevs_discovered": 1, 00:34:44.405 "num_base_bdevs_operational": 2, 00:34:44.405 "base_bdevs_list": [ 00:34:44.405 { 00:34:44.405 "name": "BaseBdev1", 00:34:44.405 "uuid": "2bf18054-a644-459f-a394-7cb16948aa6c", 00:34:44.405 "is_configured": true, 00:34:44.405 "data_offset": 2048, 00:34:44.405 "data_size": 63488 00:34:44.405 }, 00:34:44.405 { 00:34:44.405 "name": "BaseBdev2", 00:34:44.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:44.405 
"is_configured": false, 00:34:44.405 "data_offset": 0, 00:34:44.405 "data_size": 0 00:34:44.405 } 00:34:44.405 ] 00:34:44.405 }' 00:34:44.405 05:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:44.405 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:44.970 05:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:34:44.970 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.970 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:44.970 [2024-12-09 05:25:31.765226] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:44.970 [2024-12-09 05:25:31.765310] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:34:44.970 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.970 05:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:34:44.970 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.970 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:44.970 [2024-12-09 05:25:31.773237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:44.970 [2024-12-09 05:25:31.775897] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:44.970 [2024-12-09 05:25:31.775952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:44.970 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.970 05:25:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:34:44.970 05:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:44.970 05:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:34:44.970 05:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:44.970 05:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:44.970 05:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:44.970 05:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:44.970 05:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:44.970 05:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:44.970 05:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:44.970 05:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:44.970 05:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:44.970 05:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:44.970 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.970 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:44.970 05:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:44.970 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.970 05:25:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:44.970 "name": "Existed_Raid", 00:34:44.970 "uuid": "8e01754a-599e-4dd6-8b87-5821b70b7382", 00:34:44.970 "strip_size_kb": 64, 00:34:44.970 "state": "configuring", 00:34:44.970 "raid_level": "raid0", 00:34:44.970 "superblock": true, 00:34:44.970 "num_base_bdevs": 2, 00:34:44.970 "num_base_bdevs_discovered": 1, 00:34:44.970 "num_base_bdevs_operational": 2, 00:34:44.970 "base_bdevs_list": [ 00:34:44.970 { 00:34:44.970 "name": "BaseBdev1", 00:34:44.970 "uuid": "2bf18054-a644-459f-a394-7cb16948aa6c", 00:34:44.970 "is_configured": true, 00:34:44.970 "data_offset": 2048, 00:34:44.970 "data_size": 63488 00:34:44.970 }, 00:34:44.970 { 00:34:44.970 "name": "BaseBdev2", 00:34:44.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:44.970 "is_configured": false, 00:34:44.970 "data_offset": 0, 00:34:44.970 "data_size": 0 00:34:44.970 } 00:34:44.970 ] 00:34:44.970 }' 00:34:44.970 05:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:44.970 05:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:45.550 05:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:34:45.550 05:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.550 05:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:45.550 [2024-12-09 05:25:32.346325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:45.550 [2024-12-09 05:25:32.346737] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:34:45.550 [2024-12-09 05:25:32.346756] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:34:45.550 [2024-12-09 05:25:32.347159] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:34:45.550 [2024-12-09 05:25:32.347378] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:34:45.550 [2024-12-09 05:25:32.347403] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:34:45.550 BaseBdev2 00:34:45.550 [2024-12-09 05:25:32.347599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:45.550 05:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.550 05:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:34:45.550 05:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:34:45.551 05:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:45.551 05:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:34:45.551 05:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:45.551 05:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:45.551 05:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:45.551 05:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.551 05:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:45.551 05:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.551 05:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:45.551 05:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.551 05:25:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:45.551 [ 00:34:45.551 { 00:34:45.551 "name": "BaseBdev2", 00:34:45.551 "aliases": [ 00:34:45.551 "9dbeb72a-ebce-4def-b149-cf9189d05b14" 00:34:45.551 ], 00:34:45.551 "product_name": "Malloc disk", 00:34:45.551 "block_size": 512, 00:34:45.551 "num_blocks": 65536, 00:34:45.551 "uuid": "9dbeb72a-ebce-4def-b149-cf9189d05b14", 00:34:45.551 "assigned_rate_limits": { 00:34:45.551 "rw_ios_per_sec": 0, 00:34:45.551 "rw_mbytes_per_sec": 0, 00:34:45.551 "r_mbytes_per_sec": 0, 00:34:45.551 "w_mbytes_per_sec": 0 00:34:45.551 }, 00:34:45.551 "claimed": true, 00:34:45.551 "claim_type": "exclusive_write", 00:34:45.551 "zoned": false, 00:34:45.551 "supported_io_types": { 00:34:45.551 "read": true, 00:34:45.551 "write": true, 00:34:45.551 "unmap": true, 00:34:45.551 "flush": true, 00:34:45.551 "reset": true, 00:34:45.551 "nvme_admin": false, 00:34:45.551 "nvme_io": false, 00:34:45.551 "nvme_io_md": false, 00:34:45.551 "write_zeroes": true, 00:34:45.551 "zcopy": true, 00:34:45.551 "get_zone_info": false, 00:34:45.551 "zone_management": false, 00:34:45.551 "zone_append": false, 00:34:45.551 "compare": false, 00:34:45.551 "compare_and_write": false, 00:34:45.551 "abort": true, 00:34:45.551 "seek_hole": false, 00:34:45.551 "seek_data": false, 00:34:45.551 "copy": true, 00:34:45.551 "nvme_iov_md": false 00:34:45.551 }, 00:34:45.551 "memory_domains": [ 00:34:45.551 { 00:34:45.551 "dma_device_id": "system", 00:34:45.551 "dma_device_type": 1 00:34:45.551 }, 00:34:45.551 { 00:34:45.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:45.551 "dma_device_type": 2 00:34:45.551 } 00:34:45.551 ], 00:34:45.551 "driver_specific": {} 00:34:45.551 } 00:34:45.551 ] 00:34:45.551 05:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.551 05:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:34:45.551 05:25:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:34:45.551 05:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:45.551 05:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:34:45.551 05:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:45.551 05:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:45.551 05:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:45.551 05:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:45.551 05:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:45.551 05:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:45.551 05:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:45.551 05:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:45.551 05:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:45.551 05:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:45.551 05:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:45.551 05:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.551 05:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:45.551 05:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.551 05:25:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:45.551 "name": "Existed_Raid", 00:34:45.551 "uuid": "8e01754a-599e-4dd6-8b87-5821b70b7382", 00:34:45.551 "strip_size_kb": 64, 00:34:45.551 "state": "online", 00:34:45.551 "raid_level": "raid0", 00:34:45.551 "superblock": true, 00:34:45.551 "num_base_bdevs": 2, 00:34:45.551 "num_base_bdevs_discovered": 2, 00:34:45.551 "num_base_bdevs_operational": 2, 00:34:45.551 "base_bdevs_list": [ 00:34:45.551 { 00:34:45.551 "name": "BaseBdev1", 00:34:45.551 "uuid": "2bf18054-a644-459f-a394-7cb16948aa6c", 00:34:45.551 "is_configured": true, 00:34:45.551 "data_offset": 2048, 00:34:45.551 "data_size": 63488 00:34:45.551 }, 00:34:45.551 { 00:34:45.551 "name": "BaseBdev2", 00:34:45.551 "uuid": "9dbeb72a-ebce-4def-b149-cf9189d05b14", 00:34:45.551 "is_configured": true, 00:34:45.551 "data_offset": 2048, 00:34:45.551 "data_size": 63488 00:34:45.551 } 00:34:45.551 ] 00:34:45.551 }' 00:34:45.551 05:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:45.551 05:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:46.115 05:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:34:46.115 05:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:34:46.115 05:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:46.115 05:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:46.115 05:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:34:46.115 05:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:46.115 05:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:34:46.115 05:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:46.115 05:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.115 05:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:46.115 [2024-12-09 05:25:32.895030] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:46.115 05:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.115 05:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:46.115 "name": "Existed_Raid", 00:34:46.115 "aliases": [ 00:34:46.115 "8e01754a-599e-4dd6-8b87-5821b70b7382" 00:34:46.115 ], 00:34:46.115 "product_name": "Raid Volume", 00:34:46.115 "block_size": 512, 00:34:46.115 "num_blocks": 126976, 00:34:46.115 "uuid": "8e01754a-599e-4dd6-8b87-5821b70b7382", 00:34:46.115 "assigned_rate_limits": { 00:34:46.115 "rw_ios_per_sec": 0, 00:34:46.115 "rw_mbytes_per_sec": 0, 00:34:46.115 "r_mbytes_per_sec": 0, 00:34:46.115 "w_mbytes_per_sec": 0 00:34:46.115 }, 00:34:46.115 "claimed": false, 00:34:46.115 "zoned": false, 00:34:46.115 "supported_io_types": { 00:34:46.115 "read": true, 00:34:46.115 "write": true, 00:34:46.115 "unmap": true, 00:34:46.115 "flush": true, 00:34:46.115 "reset": true, 00:34:46.115 "nvme_admin": false, 00:34:46.115 "nvme_io": false, 00:34:46.115 "nvme_io_md": false, 00:34:46.115 "write_zeroes": true, 00:34:46.115 "zcopy": false, 00:34:46.115 "get_zone_info": false, 00:34:46.115 "zone_management": false, 00:34:46.115 "zone_append": false, 00:34:46.115 "compare": false, 00:34:46.115 "compare_and_write": false, 00:34:46.115 "abort": false, 00:34:46.115 "seek_hole": false, 00:34:46.115 "seek_data": false, 00:34:46.115 "copy": false, 00:34:46.115 "nvme_iov_md": false 00:34:46.115 }, 00:34:46.115 "memory_domains": [ 00:34:46.115 { 00:34:46.115 
"dma_device_id": "system", 00:34:46.115 "dma_device_type": 1 00:34:46.115 }, 00:34:46.115 { 00:34:46.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:46.115 "dma_device_type": 2 00:34:46.115 }, 00:34:46.115 { 00:34:46.115 "dma_device_id": "system", 00:34:46.115 "dma_device_type": 1 00:34:46.115 }, 00:34:46.115 { 00:34:46.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:46.115 "dma_device_type": 2 00:34:46.115 } 00:34:46.115 ], 00:34:46.115 "driver_specific": { 00:34:46.115 "raid": { 00:34:46.115 "uuid": "8e01754a-599e-4dd6-8b87-5821b70b7382", 00:34:46.115 "strip_size_kb": 64, 00:34:46.115 "state": "online", 00:34:46.115 "raid_level": "raid0", 00:34:46.115 "superblock": true, 00:34:46.115 "num_base_bdevs": 2, 00:34:46.115 "num_base_bdevs_discovered": 2, 00:34:46.115 "num_base_bdevs_operational": 2, 00:34:46.115 "base_bdevs_list": [ 00:34:46.115 { 00:34:46.115 "name": "BaseBdev1", 00:34:46.115 "uuid": "2bf18054-a644-459f-a394-7cb16948aa6c", 00:34:46.115 "is_configured": true, 00:34:46.115 "data_offset": 2048, 00:34:46.115 "data_size": 63488 00:34:46.115 }, 00:34:46.115 { 00:34:46.115 "name": "BaseBdev2", 00:34:46.115 "uuid": "9dbeb72a-ebce-4def-b149-cf9189d05b14", 00:34:46.115 "is_configured": true, 00:34:46.115 "data_offset": 2048, 00:34:46.115 "data_size": 63488 00:34:46.115 } 00:34:46.115 ] 00:34:46.115 } 00:34:46.115 } 00:34:46.115 }' 00:34:46.115 05:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:46.115 05:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:34:46.115 BaseBdev2' 00:34:46.115 05:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:46.115 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:34:46.115 05:25:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:46.115 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:34:46.115 05:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.115 05:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:46.115 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:46.115 05:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.372 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:46.372 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:46.373 [2024-12-09 05:25:33.138741] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:46.373 [2024-12-09 05:25:33.138803] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:46.373 [2024-12-09 05:25:33.138878] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:46.373 "name": "Existed_Raid", 00:34:46.373 "uuid": "8e01754a-599e-4dd6-8b87-5821b70b7382", 00:34:46.373 "strip_size_kb": 64, 00:34:46.373 "state": "offline", 00:34:46.373 "raid_level": "raid0", 00:34:46.373 "superblock": true, 00:34:46.373 "num_base_bdevs": 2, 00:34:46.373 "num_base_bdevs_discovered": 1, 00:34:46.373 "num_base_bdevs_operational": 1, 00:34:46.373 "base_bdevs_list": [ 00:34:46.373 { 00:34:46.373 "name": null, 00:34:46.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:46.373 "is_configured": false, 00:34:46.373 "data_offset": 0, 00:34:46.373 "data_size": 63488 00:34:46.373 }, 00:34:46.373 { 00:34:46.373 "name": "BaseBdev2", 00:34:46.373 "uuid": "9dbeb72a-ebce-4def-b149-cf9189d05b14", 00:34:46.373 "is_configured": true, 00:34:46.373 "data_offset": 2048, 00:34:46.373 "data_size": 63488 00:34:46.373 } 00:34:46.373 ] 
00:34:46.373 }' 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:46.373 05:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:46.938 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:34:46.938 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:46.938 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:46.938 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:34:46.938 05:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.938 05:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:46.938 05:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.938 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:34:46.938 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:46.938 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:34:46.938 05:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.938 05:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:46.938 [2024-12-09 05:25:33.847214] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:46.938 [2024-12-09 05:25:33.847287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:34:47.196 05:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.196 05:25:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:34:47.196 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:47.196 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:47.196 05:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:34:47.196 05:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.196 05:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:47.196 05:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.196 05:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:34:47.196 05:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:34:47.196 05:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:34:47.196 05:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60925 00:34:47.196 05:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60925 ']' 00:34:47.196 05:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60925 00:34:47.196 05:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:34:47.196 05:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:47.196 05:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60925 00:34:47.196 killing process with pid 60925 00:34:47.196 05:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:47.196 05:25:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:47.196 05:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60925' 00:34:47.196 05:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60925 00:34:47.196 05:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60925 00:34:47.196 [2024-12-09 05:25:34.043509] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:47.196 [2024-12-09 05:25:34.061126] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:49.097 05:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:34:49.097 00:34:49.097 real 0m6.110s 00:34:49.097 user 0m8.918s 00:34:49.097 sys 0m0.906s 00:34:49.097 05:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:49.097 ************************************ 00:34:49.097 END TEST raid_state_function_test_sb 00:34:49.097 ************************************ 00:34:49.097 05:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:49.097 05:25:35 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:34:49.097 05:25:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:49.097 05:25:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:49.097 05:25:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:49.097 ************************************ 00:34:49.097 START TEST raid_superblock_test 00:34:49.097 ************************************ 00:34:49.097 05:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:34:49.097 05:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:34:49.097 05:25:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:34:49.097 05:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:34:49.097 05:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:34:49.097 05:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:34:49.097 05:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:34:49.097 05:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:34:49.097 05:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:34:49.097 05:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:34:49.097 05:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:34:49.097 05:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:34:49.097 05:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:34:49.097 05:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:34:49.097 05:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:34:49.097 05:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:34:49.097 05:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:34:49.097 05:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61188 00:34:49.097 05:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61188 00:34:49.097 05:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:34:49.097 05:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61188 ']' 00:34:49.097 
05:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:49.097 05:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:49.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:49.097 05:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:49.097 05:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:49.097 05:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:49.097 [2024-12-09 05:25:35.724362] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:34:49.097 [2024-12-09 05:25:35.724528] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61188 ] 00:34:49.097 [2024-12-09 05:25:35.899678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:49.097 [2024-12-09 05:25:36.038971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:49.356 [2024-12-09 05:25:36.258047] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:49.356 [2024-12-09 05:25:36.258430] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:49.924 05:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:49.924 05:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:34:49.924 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:34:49.924 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:34:49.924 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:34:49.924 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:34:49.924 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:34:49.924 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:49.924 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:34:49.924 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:49.924 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:34:49.924 05:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.924 05:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:49.924 malloc1 00:34:49.924 05:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.924 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:49.924 05:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.924 05:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:49.925 [2024-12-09 05:25:36.774858] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:49.925 [2024-12-09 05:25:36.775111] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:49.925 [2024-12-09 05:25:36.775202] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:34:49.925 [2024-12-09 05:25:36.775374] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:34:49.925 [2024-12-09 05:25:36.778330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:49.925 [2024-12-09 05:25:36.778515] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:49.925 pt1 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:49.925 malloc2 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:49.925 [2024-12-09 05:25:36.827829] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:49.925 [2024-12-09 05:25:36.827904] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:49.925 [2024-12-09 05:25:36.827940] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:34:49.925 [2024-12-09 05:25:36.827954] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:49.925 [2024-12-09 05:25:36.830712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:49.925 [2024-12-09 05:25:36.830966] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:49.925 pt2 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:49.925 [2024-12-09 05:25:36.835896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:49.925 [2024-12-09 05:25:36.838482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:49.925 [2024-12-09 05:25:36.838663] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:34:49.925 [2024-12-09 05:25:36.838680] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:34:49.925 [2024-12-09 05:25:36.838998] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:34:49.925 [2024-12-09 05:25:36.839249] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:34:49.925 [2024-12-09 05:25:36.839276] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:34:49.925 [2024-12-09 05:25:36.839452] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.925 05:25:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:49.925 05:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.183 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:50.183 "name": "raid_bdev1", 00:34:50.183 "uuid": "1653f3a2-ea5a-4fd3-aa11-3914feb2433c", 00:34:50.183 "strip_size_kb": 64, 00:34:50.183 "state": "online", 00:34:50.183 "raid_level": "raid0", 00:34:50.183 "superblock": true, 00:34:50.183 "num_base_bdevs": 2, 00:34:50.183 "num_base_bdevs_discovered": 2, 00:34:50.183 "num_base_bdevs_operational": 2, 00:34:50.183 "base_bdevs_list": [ 00:34:50.183 { 00:34:50.183 "name": "pt1", 00:34:50.183 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:50.183 "is_configured": true, 00:34:50.183 "data_offset": 2048, 00:34:50.183 "data_size": 63488 00:34:50.183 }, 00:34:50.183 { 00:34:50.183 "name": "pt2", 00:34:50.183 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:50.183 "is_configured": true, 00:34:50.183 "data_offset": 2048, 00:34:50.183 "data_size": 63488 00:34:50.183 } 00:34:50.183 ] 00:34:50.183 }' 00:34:50.183 05:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:50.183 05:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:50.442 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:34:50.442 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:34:50.442 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:50.442 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:50.442 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # 
local name 00:34:50.442 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:50.442 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:50.442 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:50.442 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.442 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:50.442 [2024-12-09 05:25:37.352406] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:50.442 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.442 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:50.442 "name": "raid_bdev1", 00:34:50.442 "aliases": [ 00:34:50.442 "1653f3a2-ea5a-4fd3-aa11-3914feb2433c" 00:34:50.442 ], 00:34:50.442 "product_name": "Raid Volume", 00:34:50.442 "block_size": 512, 00:34:50.442 "num_blocks": 126976, 00:34:50.442 "uuid": "1653f3a2-ea5a-4fd3-aa11-3914feb2433c", 00:34:50.442 "assigned_rate_limits": { 00:34:50.442 "rw_ios_per_sec": 0, 00:34:50.442 "rw_mbytes_per_sec": 0, 00:34:50.442 "r_mbytes_per_sec": 0, 00:34:50.442 "w_mbytes_per_sec": 0 00:34:50.442 }, 00:34:50.442 "claimed": false, 00:34:50.442 "zoned": false, 00:34:50.442 "supported_io_types": { 00:34:50.442 "read": true, 00:34:50.442 "write": true, 00:34:50.442 "unmap": true, 00:34:50.442 "flush": true, 00:34:50.442 "reset": true, 00:34:50.442 "nvme_admin": false, 00:34:50.442 "nvme_io": false, 00:34:50.442 "nvme_io_md": false, 00:34:50.442 "write_zeroes": true, 00:34:50.442 "zcopy": false, 00:34:50.442 "get_zone_info": false, 00:34:50.442 "zone_management": false, 00:34:50.442 "zone_append": false, 00:34:50.442 "compare": false, 00:34:50.443 "compare_and_write": false, 00:34:50.443 "abort": false, 00:34:50.443 
"seek_hole": false, 00:34:50.443 "seek_data": false, 00:34:50.443 "copy": false, 00:34:50.443 "nvme_iov_md": false 00:34:50.443 }, 00:34:50.443 "memory_domains": [ 00:34:50.443 { 00:34:50.443 "dma_device_id": "system", 00:34:50.443 "dma_device_type": 1 00:34:50.443 }, 00:34:50.443 { 00:34:50.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:50.443 "dma_device_type": 2 00:34:50.443 }, 00:34:50.443 { 00:34:50.443 "dma_device_id": "system", 00:34:50.443 "dma_device_type": 1 00:34:50.443 }, 00:34:50.443 { 00:34:50.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:50.443 "dma_device_type": 2 00:34:50.443 } 00:34:50.443 ], 00:34:50.443 "driver_specific": { 00:34:50.443 "raid": { 00:34:50.443 "uuid": "1653f3a2-ea5a-4fd3-aa11-3914feb2433c", 00:34:50.443 "strip_size_kb": 64, 00:34:50.443 "state": "online", 00:34:50.443 "raid_level": "raid0", 00:34:50.443 "superblock": true, 00:34:50.443 "num_base_bdevs": 2, 00:34:50.443 "num_base_bdevs_discovered": 2, 00:34:50.443 "num_base_bdevs_operational": 2, 00:34:50.443 "base_bdevs_list": [ 00:34:50.443 { 00:34:50.443 "name": "pt1", 00:34:50.443 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:50.443 "is_configured": true, 00:34:50.443 "data_offset": 2048, 00:34:50.443 "data_size": 63488 00:34:50.443 }, 00:34:50.443 { 00:34:50.443 "name": "pt2", 00:34:50.443 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:50.443 "is_configured": true, 00:34:50.443 "data_offset": 2048, 00:34:50.443 "data_size": 63488 00:34:50.443 } 00:34:50.443 ] 00:34:50.443 } 00:34:50.443 } 00:34:50.443 }' 00:34:50.443 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:50.702 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:34:50.702 pt2' 00:34:50.702 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:34:50.702 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:34:50.702 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:50.702 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:34:50.702 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:50.702 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.702 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:50.702 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.702 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:50.702 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:50.702 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:50.702 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:34:50.702 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.702 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:50.702 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:50.702 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.703 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:50.703 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:50.703 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq 
-r '.[] | .uuid' 00:34:50.703 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:50.703 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.703 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:50.703 [2024-12-09 05:25:37.604472] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:50.703 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.703 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1653f3a2-ea5a-4fd3-aa11-3914feb2433c 00:34:50.703 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1653f3a2-ea5a-4fd3-aa11-3914feb2433c ']' 00:34:50.703 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:50.703 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.703 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:50.703 [2024-12-09 05:25:37.656141] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:50.703 [2024-12-09 05:25:37.656184] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:50.703 [2024-12-09 05:25:37.656296] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:50.703 [2024-12-09 05:25:37.656357] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:50.703 [2024-12-09 05:25:37.656375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:34:50.703 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.703 05:25:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:50.703 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.703 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:50.703 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:34:50.978 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.978 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:34:50.978 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:34:50.978 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:34:50.978 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:34:50.978 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.978 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:50.978 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.978 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:34:50.978 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:34:50.978 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.978 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:50.978 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.978 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:34:50.978 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.978 05:25:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:34:50.978 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:34:50.978 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.978 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:34:50.978 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:34:50.978 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:34:50.978 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:34:50.978 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:50.978 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:50.978 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:50.978 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:50.978 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:34:50.978 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.978 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:50.978 [2024-12-09 05:25:37.796224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:34:50.978 [2024-12-09 05:25:37.798845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:34:50.978 [2024-12-09 05:25:37.798960] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:34:50.978 [2024-12-09 05:25:37.799027] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:34:50.978 [2024-12-09 05:25:37.799051] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:50.979 [2024-12-09 05:25:37.799068] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:34:50.979 request: 00:34:50.979 { 00:34:50.979 "name": "raid_bdev1", 00:34:50.979 "raid_level": "raid0", 00:34:50.979 "base_bdevs": [ 00:34:50.979 "malloc1", 00:34:50.979 "malloc2" 00:34:50.979 ], 00:34:50.979 "strip_size_kb": 64, 00:34:50.979 "superblock": false, 00:34:50.979 "method": "bdev_raid_create", 00:34:50.979 "req_id": 1 00:34:50.979 } 00:34:50.979 Got JSON-RPC error response 00:34:50.979 response: 00:34:50.979 { 00:34:50.979 "code": -17, 00:34:50.979 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:34:50.979 } 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:50.979 [2024-12-09 05:25:37.860196] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:50.979 [2024-12-09 05:25:37.860281] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:50.979 [2024-12-09 05:25:37.860306] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:34:50.979 [2024-12-09 05:25:37.860323] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:50.979 [2024-12-09 05:25:37.863333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:50.979 [2024-12-09 05:25:37.863405] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:50.979 [2024-12-09 05:25:37.863502] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:34:50.979 [2024-12-09 05:25:37.863562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:50.979 pt1 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:50.979 "name": "raid_bdev1", 00:34:50.979 "uuid": "1653f3a2-ea5a-4fd3-aa11-3914feb2433c", 00:34:50.979 "strip_size_kb": 64, 00:34:50.979 "state": "configuring", 00:34:50.979 "raid_level": "raid0", 00:34:50.979 "superblock": true, 00:34:50.979 "num_base_bdevs": 2, 00:34:50.979 "num_base_bdevs_discovered": 1, 00:34:50.979 "num_base_bdevs_operational": 2, 00:34:50.979 "base_bdevs_list": [ 00:34:50.979 { 00:34:50.979 "name": 
"pt1", 00:34:50.979 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:50.979 "is_configured": true, 00:34:50.979 "data_offset": 2048, 00:34:50.979 "data_size": 63488 00:34:50.979 }, 00:34:50.979 { 00:34:50.979 "name": null, 00:34:50.979 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:50.979 "is_configured": false, 00:34:50.979 "data_offset": 2048, 00:34:50.979 "data_size": 63488 00:34:50.979 } 00:34:50.979 ] 00:34:50.979 }' 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:50.979 05:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:51.551 05:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:34:51.551 05:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:34:51.551 05:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:34:51.551 05:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:51.551 05:25:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.551 05:25:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:51.551 [2024-12-09 05:25:38.392608] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:51.551 [2024-12-09 05:25:38.392716] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:51.551 [2024-12-09 05:25:38.392773] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:34:51.551 [2024-12-09 05:25:38.392794] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:51.551 [2024-12-09 05:25:38.393426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:51.551 [2024-12-09 05:25:38.393473] vbdev_passthru.c: 711:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:34:51.551 [2024-12-09 05:25:38.393592] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:34:51.551 [2024-12-09 05:25:38.393649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:51.551 [2024-12-09 05:25:38.393816] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:34:51.551 [2024-12-09 05:25:38.393852] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:34:51.551 [2024-12-09 05:25:38.394199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:34:51.551 [2024-12-09 05:25:38.394394] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:34:51.551 [2024-12-09 05:25:38.394415] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:34:51.551 [2024-12-09 05:25:38.394586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:51.551 pt2 00:34:51.551 05:25:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.551 05:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:34:51.551 05:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:34:51.551 05:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:34:51.551 05:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:51.551 05:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:51.551 05:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:51.551 05:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:51.551 05:25:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:51.551 05:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:51.551 05:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:51.551 05:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:51.551 05:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:51.551 05:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:51.551 05:25:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.551 05:25:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:51.551 05:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:51.551 05:25:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.551 05:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:51.551 "name": "raid_bdev1", 00:34:51.551 "uuid": "1653f3a2-ea5a-4fd3-aa11-3914feb2433c", 00:34:51.551 "strip_size_kb": 64, 00:34:51.551 "state": "online", 00:34:51.551 "raid_level": "raid0", 00:34:51.551 "superblock": true, 00:34:51.551 "num_base_bdevs": 2, 00:34:51.551 "num_base_bdevs_discovered": 2, 00:34:51.551 "num_base_bdevs_operational": 2, 00:34:51.551 "base_bdevs_list": [ 00:34:51.551 { 00:34:51.551 "name": "pt1", 00:34:51.551 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:51.551 "is_configured": true, 00:34:51.551 "data_offset": 2048, 00:34:51.551 "data_size": 63488 00:34:51.551 }, 00:34:51.551 { 00:34:51.551 "name": "pt2", 00:34:51.551 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:51.551 "is_configured": true, 00:34:51.551 "data_offset": 2048, 00:34:51.551 "data_size": 63488 00:34:51.551 } 
00:34:51.551 ] 00:34:51.552 }' 00:34:51.552 05:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:51.552 05:25:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:52.118 05:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:34:52.118 05:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:34:52.118 05:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:52.118 05:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:52.118 05:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:34:52.118 05:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:52.118 05:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:52.118 05:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:52.118 05:25:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.118 05:25:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:52.118 [2024-12-09 05:25:38.929254] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:52.118 05:25:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.118 05:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:52.118 "name": "raid_bdev1", 00:34:52.118 "aliases": [ 00:34:52.118 "1653f3a2-ea5a-4fd3-aa11-3914feb2433c" 00:34:52.118 ], 00:34:52.118 "product_name": "Raid Volume", 00:34:52.118 "block_size": 512, 00:34:52.118 "num_blocks": 126976, 00:34:52.118 "uuid": "1653f3a2-ea5a-4fd3-aa11-3914feb2433c", 00:34:52.118 "assigned_rate_limits": { 00:34:52.118 "rw_ios_per_sec": 0, 
00:34:52.118 "rw_mbytes_per_sec": 0, 00:34:52.118 "r_mbytes_per_sec": 0, 00:34:52.118 "w_mbytes_per_sec": 0 00:34:52.118 }, 00:34:52.118 "claimed": false, 00:34:52.118 "zoned": false, 00:34:52.118 "supported_io_types": { 00:34:52.118 "read": true, 00:34:52.118 "write": true, 00:34:52.118 "unmap": true, 00:34:52.118 "flush": true, 00:34:52.118 "reset": true, 00:34:52.118 "nvme_admin": false, 00:34:52.118 "nvme_io": false, 00:34:52.118 "nvme_io_md": false, 00:34:52.118 "write_zeroes": true, 00:34:52.118 "zcopy": false, 00:34:52.118 "get_zone_info": false, 00:34:52.118 "zone_management": false, 00:34:52.118 "zone_append": false, 00:34:52.118 "compare": false, 00:34:52.118 "compare_and_write": false, 00:34:52.118 "abort": false, 00:34:52.118 "seek_hole": false, 00:34:52.118 "seek_data": false, 00:34:52.118 "copy": false, 00:34:52.118 "nvme_iov_md": false 00:34:52.118 }, 00:34:52.118 "memory_domains": [ 00:34:52.118 { 00:34:52.118 "dma_device_id": "system", 00:34:52.118 "dma_device_type": 1 00:34:52.118 }, 00:34:52.118 { 00:34:52.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:52.118 "dma_device_type": 2 00:34:52.118 }, 00:34:52.119 { 00:34:52.119 "dma_device_id": "system", 00:34:52.119 "dma_device_type": 1 00:34:52.119 }, 00:34:52.119 { 00:34:52.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:52.119 "dma_device_type": 2 00:34:52.119 } 00:34:52.119 ], 00:34:52.119 "driver_specific": { 00:34:52.119 "raid": { 00:34:52.119 "uuid": "1653f3a2-ea5a-4fd3-aa11-3914feb2433c", 00:34:52.119 "strip_size_kb": 64, 00:34:52.119 "state": "online", 00:34:52.119 "raid_level": "raid0", 00:34:52.119 "superblock": true, 00:34:52.119 "num_base_bdevs": 2, 00:34:52.119 "num_base_bdevs_discovered": 2, 00:34:52.119 "num_base_bdevs_operational": 2, 00:34:52.119 "base_bdevs_list": [ 00:34:52.119 { 00:34:52.119 "name": "pt1", 00:34:52.119 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:52.119 "is_configured": true, 00:34:52.119 "data_offset": 2048, 00:34:52.119 "data_size": 63488 
00:34:52.119 }, 00:34:52.119 { 00:34:52.119 "name": "pt2", 00:34:52.119 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:52.119 "is_configured": true, 00:34:52.119 "data_offset": 2048, 00:34:52.119 "data_size": 63488 00:34:52.119 } 00:34:52.119 ] 00:34:52.119 } 00:34:52.119 } 00:34:52.119 }' 00:34:52.119 05:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:52.119 05:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:34:52.119 pt2' 00:34:52.119 05:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:34:52.378 05:25:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:34:52.378 [2024-12-09 05:25:39.221378] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1653f3a2-ea5a-4fd3-aa11-3914feb2433c '!=' 1653f3a2-ea5a-4fd3-aa11-3914feb2433c ']' 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61188 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61188 ']' 00:34:52.378 05:25:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61188 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61188 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:52.378 killing process with pid 61188 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61188' 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61188 00:34:52.378 [2024-12-09 05:25:39.305590] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:52.378 05:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61188 00:34:52.378 [2024-12-09 05:25:39.305706] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:52.378 [2024-12-09 05:25:39.305792] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:52.378 [2024-12-09 05:25:39.305813] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:34:52.637 [2024-12-09 05:25:39.531416] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:54.012 05:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:34:54.012 00:34:54.012 real 0m5.071s 00:34:54.012 user 0m7.319s 00:34:54.012 sys 0m0.818s 00:34:54.012 05:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:54.012 ************************************ 
00:34:54.012 END TEST raid_superblock_test 00:34:54.012 ************************************ 00:34:54.012 05:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:54.012 05:25:40 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:34:54.012 05:25:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:34:54.012 05:25:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:54.012 05:25:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:54.012 ************************************ 00:34:54.012 START TEST raid_read_error_test 00:34:54.012 ************************************ 00:34:54.012 05:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:34:54.012 05:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:34:54.012 05:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:34:54.012 05:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:34:54.012 05:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:34:54.012 05:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:54.012 05:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:34:54.012 05:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:34:54.012 05:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:54.012 05:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:34:54.012 05:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:34:54.012 05:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:54.012 05:25:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:34:54.012 05:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:34:54.012 05:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:34:54.012 05:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:34:54.012 05:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:34:54.012 05:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:34:54.012 05:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:34:54.012 05:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:34:54.012 05:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:34:54.012 05:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:34:54.012 05:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:34:54.012 05:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Meoy3gyRrV 00:34:54.012 05:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61400 00:34:54.012 05:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61400 00:34:54.012 05:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61400 ']' 00:34:54.012 05:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:34:54.012 05:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:54.012 05:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 
00:34:54.012 05:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:54.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:54.012 05:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:54.012 05:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:54.012 [2024-12-09 05:25:40.880911] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:34:54.012 [2024-12-09 05:25:40.881090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61400 ] 00:34:54.271 [2024-12-09 05:25:41.068408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:54.271 [2024-12-09 05:25:41.188050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:54.529 [2024-12-09 05:25:41.389425] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:54.529 [2024-12-09 05:25:41.389490] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:34:55.096 BaseBdev1_malloc 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:55.096 true 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:55.096 [2024-12-09 05:25:41.859311] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:34:55.096 [2024-12-09 05:25:41.859390] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:55.096 [2024-12-09 05:25:41.859417] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:34:55.096 [2024-12-09 05:25:41.859435] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:55.096 [2024-12-09 05:25:41.862295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:55.096 [2024-12-09 05:25:41.862386] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:55.096 BaseBdev1 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:34:55.096 05:25:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:55.096 BaseBdev2_malloc 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:55.096 true 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:55.096 [2024-12-09 05:25:41.915418] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:34:55.096 [2024-12-09 05:25:41.915502] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:55.096 [2024-12-09 05:25:41.915525] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:34:55.096 [2024-12-09 05:25:41.915543] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:55.096 [2024-12-09 05:25:41.918518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:55.096 [2024-12-09 05:25:41.918577] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:34:55.096 BaseBdev2 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:55.096 [2024-12-09 05:25:41.923566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:55.096 [2024-12-09 05:25:41.926269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:55.096 [2024-12-09 05:25:41.926599] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:34:55.096 [2024-12-09 05:25:41.926624] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:34:55.096 [2024-12-09 05:25:41.926979] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:34:55.096 [2024-12-09 05:25:41.927270] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:34:55.096 [2024-12-09 05:25:41.927291] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:34:55.096 [2024-12-09 05:25:41.927528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:34:55.096 05:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:55.097 05:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:34:55.097 05:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:55.097 05:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:55.097 05:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:55.097 05:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:55.097 05:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:55.097 05:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:55.097 05:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:55.097 05:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:55.097 05:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.097 05:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:55.097 05:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:55.097 05:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.097 05:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:55.097 "name": "raid_bdev1", 00:34:55.097 "uuid": "7e874b1e-e4b1-4397-b46e-67cd34f277cf", 00:34:55.097 "strip_size_kb": 64, 00:34:55.097 "state": "online", 00:34:55.097 "raid_level": "raid0", 00:34:55.097 "superblock": true, 00:34:55.097 "num_base_bdevs": 2, 00:34:55.097 "num_base_bdevs_discovered": 2, 00:34:55.097 "num_base_bdevs_operational": 2, 00:34:55.097 "base_bdevs_list": [ 00:34:55.097 { 00:34:55.097 "name": "BaseBdev1", 00:34:55.097 "uuid": "1bb72ec0-12e9-58ee-a518-15fe9f9a27c6", 00:34:55.097 "is_configured": true, 00:34:55.097 "data_offset": 2048, 00:34:55.097 "data_size": 63488 
00:34:55.097 }, 00:34:55.097 { 00:34:55.097 "name": "BaseBdev2", 00:34:55.097 "uuid": "770d45d4-e671-57b3-a5d2-4dfbc3479059", 00:34:55.097 "is_configured": true, 00:34:55.097 "data_offset": 2048, 00:34:55.097 "data_size": 63488 00:34:55.097 } 00:34:55.097 ] 00:34:55.097 }' 00:34:55.097 05:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:55.097 05:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:55.663 05:25:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:34:55.663 05:25:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:34:55.663 [2024-12-09 05:25:42.569871] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:34:56.598 05:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:34:56.598 05:25:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.598 05:25:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:56.598 05:25:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.598 05:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:34:56.598 05:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:34:56.598 05:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:34:56.598 05:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:34:56.598 05:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:56.598 05:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:34:56.598 05:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:56.598 05:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:56.598 05:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:56.598 05:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:56.598 05:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:56.598 05:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:56.598 05:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:56.598 05:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:56.598 05:25:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.598 05:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:56.598 05:25:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:56.598 05:25:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.598 05:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:56.598 "name": "raid_bdev1", 00:34:56.598 "uuid": "7e874b1e-e4b1-4397-b46e-67cd34f277cf", 00:34:56.598 "strip_size_kb": 64, 00:34:56.598 "state": "online", 00:34:56.598 "raid_level": "raid0", 00:34:56.598 "superblock": true, 00:34:56.598 "num_base_bdevs": 2, 00:34:56.598 "num_base_bdevs_discovered": 2, 00:34:56.598 "num_base_bdevs_operational": 2, 00:34:56.598 "base_bdevs_list": [ 00:34:56.598 { 00:34:56.598 "name": "BaseBdev1", 00:34:56.598 "uuid": "1bb72ec0-12e9-58ee-a518-15fe9f9a27c6", 00:34:56.598 "is_configured": true, 00:34:56.598 "data_offset": 2048, 00:34:56.598 "data_size": 63488 
00:34:56.598 }, 00:34:56.598 { 00:34:56.598 "name": "BaseBdev2", 00:34:56.598 "uuid": "770d45d4-e671-57b3-a5d2-4dfbc3479059", 00:34:56.598 "is_configured": true, 00:34:56.598 "data_offset": 2048, 00:34:56.598 "data_size": 63488 00:34:56.598 } 00:34:56.598 ] 00:34:56.598 }' 00:34:56.598 05:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:56.598 05:25:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:57.165 05:25:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:57.165 05:25:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.165 05:25:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:57.165 [2024-12-09 05:25:44.005461] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:57.165 [2024-12-09 05:25:44.005509] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:57.165 [2024-12-09 05:25:44.009501] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:57.165 [2024-12-09 05:25:44.009611] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:57.165 [2024-12-09 05:25:44.009662] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:57.165 [2024-12-09 05:25:44.009682] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:34:57.165 { 00:34:57.165 "results": [ 00:34:57.165 { 00:34:57.165 "job": "raid_bdev1", 00:34:57.165 "core_mask": "0x1", 00:34:57.165 "workload": "randrw", 00:34:57.165 "percentage": 50, 00:34:57.165 "status": "finished", 00:34:57.165 "queue_depth": 1, 00:34:57.166 "io_size": 131072, 00:34:57.166 "runtime": 1.432727, 00:34:57.166 "iops": 8913.770732316763, 00:34:57.166 "mibps": 1114.2213415395954, 00:34:57.166 
"io_failed": 1, 00:34:57.166 "io_timeout": 0, 00:34:57.166 "avg_latency_us": 156.21897446117927, 00:34:57.166 "min_latency_us": 38.86545454545455, 00:34:57.166 "max_latency_us": 2085.2363636363634 00:34:57.166 } 00:34:57.166 ], 00:34:57.166 "core_count": 1 00:34:57.166 } 00:34:57.166 05:25:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.166 05:25:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61400 00:34:57.166 05:25:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61400 ']' 00:34:57.166 05:25:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61400 00:34:57.166 05:25:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:34:57.166 05:25:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:57.166 05:25:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61400 00:34:57.166 killing process with pid 61400 00:34:57.166 05:25:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:57.166 05:25:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:57.166 05:25:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61400' 00:34:57.166 05:25:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61400 00:34:57.166 [2024-12-09 05:25:44.054108] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:57.166 05:25:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61400 00:34:57.424 [2024-12-09 05:25:44.190885] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:58.847 05:25:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:34:58.847 05:25:45 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Meoy3gyRrV 00:34:58.847 05:25:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:34:58.847 05:25:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:34:58.848 05:25:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:34:58.848 05:25:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:34:58.848 05:25:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:34:58.848 ************************************ 00:34:58.848 END TEST raid_read_error_test 00:34:58.848 ************************************ 00:34:58.848 05:25:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:34:58.848 00:34:58.848 real 0m4.816s 00:34:58.848 user 0m5.896s 00:34:58.848 sys 0m0.614s 00:34:58.848 05:25:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:58.848 05:25:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.848 05:25:45 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:34:58.848 05:25:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:34:58.848 05:25:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:58.848 05:25:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:58.848 ************************************ 00:34:58.848 START TEST raid_write_error_test 00:34:58.848 ************************************ 00:34:58.848 05:25:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:34:58.848 05:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:34:58.848 05:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:34:58.848 05:25:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:34:58.848 05:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:34:58.848 05:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:58.848 05:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:34:58.848 05:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:34:58.848 05:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:58.848 05:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:34:58.848 05:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:34:58.848 05:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:58.848 05:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:34:58.848 05:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:34:58.848 05:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:34:58.848 05:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:34:58.848 05:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:34:58.848 05:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:34:58.848 05:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:34:58.848 05:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:34:58.848 05:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:34:58.848 05:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:34:58.848 05:25:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:34:58.848 05:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.zDJzKNC7Pk 00:34:58.848 05:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61551 00:34:58.848 05:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61551 00:34:58.848 05:25:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61551 ']' 00:34:58.848 05:25:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:58.848 05:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:34:58.848 05:25:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:58.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:58.848 05:25:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:58.848 05:25:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:58.848 05:25:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.848 [2024-12-09 05:25:45.751252] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:34:58.848 [2024-12-09 05:25:45.751728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61551 ] 00:34:59.106 [2024-12-09 05:25:45.942966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:59.106 [2024-12-09 05:25:46.063999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:59.364 [2024-12-09 05:25:46.284895] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:59.364 [2024-12-09 05:25:46.284936] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.932 BaseBdev1_malloc 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.932 true 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.932 [2024-12-09 05:25:46.760448] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:34:59.932 [2024-12-09 05:25:46.760529] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:59.932 [2024-12-09 05:25:46.760559] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:34:59.932 [2024-12-09 05:25:46.760576] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:59.932 [2024-12-09 05:25:46.763626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:59.932 [2024-12-09 05:25:46.763703] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:59.932 BaseBdev1 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.932 BaseBdev2_malloc 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:34:59.932 05:25:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.932 true 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.932 [2024-12-09 05:25:46.821213] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:34:59.932 [2024-12-09 05:25:46.821307] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:59.932 [2024-12-09 05:25:46.821331] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:34:59.932 [2024-12-09 05:25:46.821346] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:59.932 [2024-12-09 05:25:46.824264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:59.932 [2024-12-09 05:25:46.824323] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:34:59.932 BaseBdev2 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.932 [2024-12-09 05:25:46.829317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:34:59.932 [2024-12-09 05:25:46.831806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:59.932 [2024-12-09 05:25:46.832103] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:34:59.932 [2024-12-09 05:25:46.832128] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:34:59.932 [2024-12-09 05:25:46.832442] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:34:59.932 [2024-12-09 05:25:46.832666] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:34:59.932 [2024-12-09 05:25:46.832687] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:34:59.932 [2024-12-09 05:25:46.832930] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:59.932 "name": "raid_bdev1", 00:34:59.932 "uuid": "030150ce-f9f1-4401-a3e7-88479b69c1b0", 00:34:59.932 "strip_size_kb": 64, 00:34:59.932 "state": "online", 00:34:59.932 "raid_level": "raid0", 00:34:59.932 "superblock": true, 00:34:59.932 "num_base_bdevs": 2, 00:34:59.932 "num_base_bdevs_discovered": 2, 00:34:59.932 "num_base_bdevs_operational": 2, 00:34:59.932 "base_bdevs_list": [ 00:34:59.932 { 00:34:59.932 "name": "BaseBdev1", 00:34:59.932 "uuid": "b54149df-7f26-5d12-91f7-3ff3ef43527a", 00:34:59.932 "is_configured": true, 00:34:59.932 "data_offset": 2048, 00:34:59.932 "data_size": 63488 00:34:59.932 }, 00:34:59.932 { 00:34:59.932 "name": "BaseBdev2", 00:34:59.932 "uuid": "1c3ccb21-d33b-55ae-9ab3-54bd6c85a839", 00:34:59.932 "is_configured": true, 00:34:59.932 "data_offset": 2048, 00:34:59.932 "data_size": 63488 00:34:59.932 } 00:34:59.932 ] 00:34:59.932 }' 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:59.932 05:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:00.499 05:25:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:35:00.499 05:25:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:35:00.499 [2024-12-09 05:25:47.443306] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:35:01.435 05:25:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:35:01.435 05:25:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.435 05:25:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:01.435 05:25:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.435 05:25:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:35:01.435 05:25:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:35:01.435 05:25:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:35:01.435 05:25:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:35:01.435 05:25:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:01.435 05:25:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:01.435 05:25:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:01.435 05:25:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:01.435 05:25:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:01.435 05:25:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:01.435 05:25:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:01.435 05:25:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:01.435 05:25:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:01.435 05:25:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:01.435 05:25:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:01.435 05:25:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.435 05:25:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:01.435 05:25:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.694 05:25:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:01.694 "name": "raid_bdev1", 00:35:01.694 "uuid": "030150ce-f9f1-4401-a3e7-88479b69c1b0", 00:35:01.694 "strip_size_kb": 64, 00:35:01.694 "state": "online", 00:35:01.694 "raid_level": "raid0", 00:35:01.694 "superblock": true, 00:35:01.694 "num_base_bdevs": 2, 00:35:01.694 "num_base_bdevs_discovered": 2, 00:35:01.694 "num_base_bdevs_operational": 2, 00:35:01.694 "base_bdevs_list": [ 00:35:01.694 { 00:35:01.694 "name": "BaseBdev1", 00:35:01.694 "uuid": "b54149df-7f26-5d12-91f7-3ff3ef43527a", 00:35:01.694 "is_configured": true, 00:35:01.694 "data_offset": 2048, 00:35:01.694 "data_size": 63488 00:35:01.694 }, 00:35:01.694 { 00:35:01.694 "name": "BaseBdev2", 00:35:01.694 "uuid": "1c3ccb21-d33b-55ae-9ab3-54bd6c85a839", 00:35:01.694 "is_configured": true, 00:35:01.694 "data_offset": 2048, 00:35:01.694 "data_size": 63488 00:35:01.694 } 00:35:01.694 ] 00:35:01.694 }' 00:35:01.694 05:25:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:01.694 05:25:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:01.953 05:25:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:35:01.953 05:25:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.953 05:25:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:01.953 [2024-12-09 05:25:48.899003] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:01.953 [2024-12-09 05:25:48.899050] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:01.953 [2024-12-09 05:25:48.902740] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:01.953 [2024-12-09 05:25:48.902826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:01.953 [2024-12-09 05:25:48.902874] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:01.953 [2024-12-09 05:25:48.902893] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:35:01.953 05:25:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.953 { 00:35:01.953 "results": [ 00:35:01.953 { 00:35:01.953 "job": "raid_bdev1", 00:35:01.953 "core_mask": "0x1", 00:35:01.953 "workload": "randrw", 00:35:01.953 "percentage": 50, 00:35:01.953 "status": "finished", 00:35:01.953 "queue_depth": 1, 00:35:01.953 "io_size": 131072, 00:35:01.953 "runtime": 1.452874, 00:35:01.953 "iops": 9319.459223580297, 00:35:01.953 "mibps": 1164.9324029475372, 00:35:01.953 "io_failed": 1, 00:35:01.953 "io_timeout": 0, 00:35:01.953 "avg_latency_us": 149.79040355553167, 00:35:01.953 "min_latency_us": 38.86545454545455, 00:35:01.953 "max_latency_us": 2293.76 00:35:01.953 } 00:35:01.953 ], 00:35:01.953 "core_count": 1 00:35:01.953 } 00:35:01.953 05:25:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61551 00:35:01.953 05:25:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 
-- # '[' -z 61551 ']' 00:35:01.953 05:25:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61551 00:35:01.953 05:25:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:35:01.953 05:25:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:01.953 05:25:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61551 00:35:02.212 05:25:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:02.212 killing process with pid 61551 00:35:02.212 05:25:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:02.212 05:25:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61551' 00:35:02.212 05:25:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61551 00:35:02.212 [2024-12-09 05:25:48.939221] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:02.212 05:25:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61551 00:35:02.212 [2024-12-09 05:25:49.073445] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:03.587 05:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.zDJzKNC7Pk 00:35:03.587 05:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:35:03.587 05:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:35:03.587 05:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:35:03.587 05:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:35:03.587 05:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:35:03.587 05:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 
00:35:03.587 05:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:35:03.587 00:35:03.587 real 0m4.643s 00:35:03.587 user 0m5.739s 00:35:03.587 sys 0m0.600s 00:35:03.587 05:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:03.587 ************************************ 00:35:03.587 05:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:03.587 END TEST raid_write_error_test 00:35:03.587 ************************************ 00:35:03.587 05:25:50 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:35:03.587 05:25:50 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:35:03.587 05:25:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:35:03.587 05:25:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:03.587 05:25:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:03.587 ************************************ 00:35:03.587 START TEST raid_state_function_test 00:35:03.587 ************************************ 00:35:03.587 05:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:35:03.587 05:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:35:03.587 05:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:35:03.587 05:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:35:03.587 05:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:35:03.587 05:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:35:03.588 05:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:03.588 05:25:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:35:03.588 05:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:03.588 05:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:03.588 05:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:35:03.588 05:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:03.588 05:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:03.588 05:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:35:03.588 05:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:35:03.588 05:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:35:03.588 05:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:35:03.588 05:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:35:03.588 05:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:35:03.588 05:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:35:03.588 05:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:35:03.588 05:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:35:03.588 05:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:35:03.588 05:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:35:03.588 05:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61695 00:35:03.588 Process raid pid: 61695 00:35:03.588 05:25:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61695' 00:35:03.588 05:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61695 00:35:03.588 05:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:35:03.588 05:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61695 ']' 00:35:03.588 05:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:03.588 05:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:03.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:03.588 05:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:03.588 05:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:03.588 05:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:03.588 [2024-12-09 05:25:50.438267] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:35:03.588 [2024-12-09 05:25:50.438535] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:03.846 [2024-12-09 05:25:50.640353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:03.846 [2024-12-09 05:25:50.767442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:04.105 [2024-12-09 05:25:50.980721] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:04.105 [2024-12-09 05:25:50.980796] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:04.671 05:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:04.671 05:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:35:04.671 05:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:35:04.671 05:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.671 05:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:04.671 [2024-12-09 05:25:51.396096] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:04.671 [2024-12-09 05:25:51.396213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:04.671 [2024-12-09 05:25:51.396247] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:04.671 [2024-12-09 05:25:51.396264] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:04.671 05:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.671 05:25:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:35:04.671 05:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:04.671 05:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:04.671 05:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:04.671 05:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:04.671 05:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:04.671 05:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:04.671 05:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:04.671 05:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:04.671 05:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:04.671 05:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:04.671 05:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:04.671 05:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.671 05:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:04.671 05:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.671 05:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:04.671 "name": "Existed_Raid", 00:35:04.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:04.671 "strip_size_kb": 64, 00:35:04.671 "state": "configuring", 00:35:04.671 
"raid_level": "concat", 00:35:04.671 "superblock": false, 00:35:04.671 "num_base_bdevs": 2, 00:35:04.671 "num_base_bdevs_discovered": 0, 00:35:04.671 "num_base_bdevs_operational": 2, 00:35:04.671 "base_bdevs_list": [ 00:35:04.671 { 00:35:04.671 "name": "BaseBdev1", 00:35:04.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:04.671 "is_configured": false, 00:35:04.671 "data_offset": 0, 00:35:04.671 "data_size": 0 00:35:04.671 }, 00:35:04.671 { 00:35:04.671 "name": "BaseBdev2", 00:35:04.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:04.672 "is_configured": false, 00:35:04.672 "data_offset": 0, 00:35:04.672 "data_size": 0 00:35:04.672 } 00:35:04.672 ] 00:35:04.672 }' 00:35:04.672 05:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:04.672 05:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:05.241 05:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:05.241 05:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.241 05:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:05.241 [2024-12-09 05:25:51.924238] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:05.241 [2024-12-09 05:25:51.924293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:35:05.241 05:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.241 05:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:35:05.241 05:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.241 05:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:35:05.241 [2024-12-09 05:25:51.932213] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:05.241 [2024-12-09 05:25:51.932292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:05.241 [2024-12-09 05:25:51.932308] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:05.241 [2024-12-09 05:25:51.932326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:05.241 05:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.241 05:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:35:05.241 05:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.241 05:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:05.241 [2024-12-09 05:25:51.978996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:05.241 BaseBdev1 00:35:05.241 05:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.241 05:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:35:05.241 05:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:35:05.241 05:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:05.241 05:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:35:05.241 05:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:05.241 05:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:05.241 05:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:35:05.241 05:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.241 05:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:05.241 05:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.241 05:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:05.241 05:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.241 05:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:05.241 [ 00:35:05.241 { 00:35:05.241 "name": "BaseBdev1", 00:35:05.241 "aliases": [ 00:35:05.241 "0839ec6a-f488-4a59-90a5-69725eca2361" 00:35:05.241 ], 00:35:05.241 "product_name": "Malloc disk", 00:35:05.241 "block_size": 512, 00:35:05.241 "num_blocks": 65536, 00:35:05.241 "uuid": "0839ec6a-f488-4a59-90a5-69725eca2361", 00:35:05.241 "assigned_rate_limits": { 00:35:05.241 "rw_ios_per_sec": 0, 00:35:05.241 "rw_mbytes_per_sec": 0, 00:35:05.241 "r_mbytes_per_sec": 0, 00:35:05.241 "w_mbytes_per_sec": 0 00:35:05.241 }, 00:35:05.241 "claimed": true, 00:35:05.241 "claim_type": "exclusive_write", 00:35:05.241 "zoned": false, 00:35:05.241 "supported_io_types": { 00:35:05.241 "read": true, 00:35:05.241 "write": true, 00:35:05.241 "unmap": true, 00:35:05.241 "flush": true, 00:35:05.241 "reset": true, 00:35:05.241 "nvme_admin": false, 00:35:05.241 "nvme_io": false, 00:35:05.241 "nvme_io_md": false, 00:35:05.241 "write_zeroes": true, 00:35:05.241 "zcopy": true, 00:35:05.241 "get_zone_info": false, 00:35:05.241 "zone_management": false, 00:35:05.241 "zone_append": false, 00:35:05.241 "compare": false, 00:35:05.241 "compare_and_write": false, 00:35:05.241 "abort": true, 00:35:05.241 "seek_hole": false, 00:35:05.241 "seek_data": false, 00:35:05.241 "copy": true, 00:35:05.241 "nvme_iov_md": 
false 00:35:05.241 }, 00:35:05.241 "memory_domains": [ 00:35:05.241 { 00:35:05.241 "dma_device_id": "system", 00:35:05.241 "dma_device_type": 1 00:35:05.241 }, 00:35:05.241 { 00:35:05.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:05.241 "dma_device_type": 2 00:35:05.241 } 00:35:05.241 ], 00:35:05.241 "driver_specific": {} 00:35:05.241 } 00:35:05.241 ] 00:35:05.241 05:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.241 05:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:35:05.241 05:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:35:05.241 05:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:05.241 05:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:05.241 05:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:05.241 05:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:05.241 05:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:05.241 05:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:05.241 05:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:05.241 05:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:05.241 05:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:05.241 05:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:05.241 05:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.241 05:25:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:05.241 05:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:05.241 05:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.241 05:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:05.241 "name": "Existed_Raid", 00:35:05.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:05.241 "strip_size_kb": 64, 00:35:05.241 "state": "configuring", 00:35:05.241 "raid_level": "concat", 00:35:05.241 "superblock": false, 00:35:05.241 "num_base_bdevs": 2, 00:35:05.241 "num_base_bdevs_discovered": 1, 00:35:05.241 "num_base_bdevs_operational": 2, 00:35:05.241 "base_bdevs_list": [ 00:35:05.241 { 00:35:05.241 "name": "BaseBdev1", 00:35:05.241 "uuid": "0839ec6a-f488-4a59-90a5-69725eca2361", 00:35:05.241 "is_configured": true, 00:35:05.241 "data_offset": 0, 00:35:05.241 "data_size": 65536 00:35:05.241 }, 00:35:05.241 { 00:35:05.241 "name": "BaseBdev2", 00:35:05.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:05.241 "is_configured": false, 00:35:05.241 "data_offset": 0, 00:35:05.241 "data_size": 0 00:35:05.241 } 00:35:05.241 ] 00:35:05.241 }' 00:35:05.241 05:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:05.241 05:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:05.809 05:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:05.809 05:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.809 05:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:05.809 [2024-12-09 05:25:52.543179] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:05.809 [2024-12-09 05:25:52.543245] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:35:05.809 05:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.809 05:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:35:05.809 05:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.809 05:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:05.809 [2024-12-09 05:25:52.551211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:05.809 [2024-12-09 05:25:52.553656] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:05.809 [2024-12-09 05:25:52.553718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:05.809 05:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.809 05:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:35:05.809 05:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:05.809 05:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:35:05.809 05:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:05.809 05:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:05.809 05:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:05.809 05:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:05.809 05:25:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:05.809 05:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:05.809 05:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:05.809 05:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:05.809 05:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:05.809 05:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:05.809 05:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.809 05:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:05.809 05:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:05.809 05:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.809 05:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:05.809 "name": "Existed_Raid", 00:35:05.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:05.809 "strip_size_kb": 64, 00:35:05.809 "state": "configuring", 00:35:05.809 "raid_level": "concat", 00:35:05.809 "superblock": false, 00:35:05.809 "num_base_bdevs": 2, 00:35:05.809 "num_base_bdevs_discovered": 1, 00:35:05.809 "num_base_bdevs_operational": 2, 00:35:05.809 "base_bdevs_list": [ 00:35:05.809 { 00:35:05.809 "name": "BaseBdev1", 00:35:05.809 "uuid": "0839ec6a-f488-4a59-90a5-69725eca2361", 00:35:05.809 "is_configured": true, 00:35:05.809 "data_offset": 0, 00:35:05.809 "data_size": 65536 00:35:05.809 }, 00:35:05.809 { 00:35:05.809 "name": "BaseBdev2", 00:35:05.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:05.809 "is_configured": false, 00:35:05.809 "data_offset": 0, 00:35:05.809 "data_size": 0 
00:35:05.809 } 00:35:05.809 ] 00:35:05.809 }' 00:35:05.809 05:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:05.809 05:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:06.375 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:35:06.375 05:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.375 05:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:06.375 [2024-12-09 05:25:53.122204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:06.375 [2024-12-09 05:25:53.122277] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:35:06.375 [2024-12-09 05:25:53.122291] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:35:06.375 [2024-12-09 05:25:53.122768] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:35:06.375 [2024-12-09 05:25:53.123017] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:35:06.375 [2024-12-09 05:25:53.123040] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:35:06.375 [2024-12-09 05:25:53.123455] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:06.375 BaseBdev2 00:35:06.375 05:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.375 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:35:06.375 05:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:35:06.375 05:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:06.375 05:25:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:35:06.375 05:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:06.375 05:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:06.375 05:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:06.375 05:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.375 05:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:06.375 05:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.375 05:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:06.375 05:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.375 05:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:06.375 [ 00:35:06.375 { 00:35:06.375 "name": "BaseBdev2", 00:35:06.375 "aliases": [ 00:35:06.375 "85a96c50-a36f-4ebf-b555-81662cf754a5" 00:35:06.375 ], 00:35:06.375 "product_name": "Malloc disk", 00:35:06.375 "block_size": 512, 00:35:06.375 "num_blocks": 65536, 00:35:06.375 "uuid": "85a96c50-a36f-4ebf-b555-81662cf754a5", 00:35:06.375 "assigned_rate_limits": { 00:35:06.375 "rw_ios_per_sec": 0, 00:35:06.375 "rw_mbytes_per_sec": 0, 00:35:06.375 "r_mbytes_per_sec": 0, 00:35:06.375 "w_mbytes_per_sec": 0 00:35:06.375 }, 00:35:06.375 "claimed": true, 00:35:06.375 "claim_type": "exclusive_write", 00:35:06.375 "zoned": false, 00:35:06.375 "supported_io_types": { 00:35:06.375 "read": true, 00:35:06.375 "write": true, 00:35:06.375 "unmap": true, 00:35:06.375 "flush": true, 00:35:06.375 "reset": true, 00:35:06.375 "nvme_admin": false, 00:35:06.375 "nvme_io": false, 00:35:06.375 "nvme_io_md": 
false, 00:35:06.375 "write_zeroes": true, 00:35:06.375 "zcopy": true, 00:35:06.375 "get_zone_info": false, 00:35:06.375 "zone_management": false, 00:35:06.375 "zone_append": false, 00:35:06.375 "compare": false, 00:35:06.375 "compare_and_write": false, 00:35:06.375 "abort": true, 00:35:06.375 "seek_hole": false, 00:35:06.375 "seek_data": false, 00:35:06.375 "copy": true, 00:35:06.375 "nvme_iov_md": false 00:35:06.375 }, 00:35:06.375 "memory_domains": [ 00:35:06.375 { 00:35:06.375 "dma_device_id": "system", 00:35:06.375 "dma_device_type": 1 00:35:06.375 }, 00:35:06.375 { 00:35:06.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:06.375 "dma_device_type": 2 00:35:06.375 } 00:35:06.375 ], 00:35:06.375 "driver_specific": {} 00:35:06.375 } 00:35:06.375 ] 00:35:06.375 05:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.375 05:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:35:06.375 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:35:06.375 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:06.375 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:35:06.375 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:06.375 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:06.375 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:06.376 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:06.376 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:06.376 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:35:06.376 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:06.376 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:06.376 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:06.376 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:06.376 05:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.376 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:06.376 05:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:06.376 05:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.376 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:06.376 "name": "Existed_Raid", 00:35:06.376 "uuid": "e8c53861-5f60-4d19-9231-c3e724b03902", 00:35:06.376 "strip_size_kb": 64, 00:35:06.376 "state": "online", 00:35:06.376 "raid_level": "concat", 00:35:06.376 "superblock": false, 00:35:06.376 "num_base_bdevs": 2, 00:35:06.376 "num_base_bdevs_discovered": 2, 00:35:06.376 "num_base_bdevs_operational": 2, 00:35:06.376 "base_bdevs_list": [ 00:35:06.376 { 00:35:06.376 "name": "BaseBdev1", 00:35:06.376 "uuid": "0839ec6a-f488-4a59-90a5-69725eca2361", 00:35:06.376 "is_configured": true, 00:35:06.376 "data_offset": 0, 00:35:06.376 "data_size": 65536 00:35:06.376 }, 00:35:06.376 { 00:35:06.376 "name": "BaseBdev2", 00:35:06.376 "uuid": "85a96c50-a36f-4ebf-b555-81662cf754a5", 00:35:06.376 "is_configured": true, 00:35:06.376 "data_offset": 0, 00:35:06.376 "data_size": 65536 00:35:06.376 } 00:35:06.376 ] 00:35:06.376 }' 00:35:06.376 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:35:06.376 05:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:06.942 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:35:06.942 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:35:06.942 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:06.942 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:06.942 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:35:06.942 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:06.942 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:35:06.942 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:06.942 05:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.942 05:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:06.942 [2024-12-09 05:25:53.674814] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:06.942 05:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.942 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:06.942 "name": "Existed_Raid", 00:35:06.942 "aliases": [ 00:35:06.942 "e8c53861-5f60-4d19-9231-c3e724b03902" 00:35:06.942 ], 00:35:06.942 "product_name": "Raid Volume", 00:35:06.942 "block_size": 512, 00:35:06.942 "num_blocks": 131072, 00:35:06.942 "uuid": "e8c53861-5f60-4d19-9231-c3e724b03902", 00:35:06.942 "assigned_rate_limits": { 00:35:06.942 "rw_ios_per_sec": 0, 00:35:06.942 "rw_mbytes_per_sec": 0, 00:35:06.942 "r_mbytes_per_sec": 
0, 00:35:06.942 "w_mbytes_per_sec": 0 00:35:06.942 }, 00:35:06.942 "claimed": false, 00:35:06.942 "zoned": false, 00:35:06.942 "supported_io_types": { 00:35:06.942 "read": true, 00:35:06.942 "write": true, 00:35:06.942 "unmap": true, 00:35:06.942 "flush": true, 00:35:06.942 "reset": true, 00:35:06.942 "nvme_admin": false, 00:35:06.942 "nvme_io": false, 00:35:06.942 "nvme_io_md": false, 00:35:06.942 "write_zeroes": true, 00:35:06.942 "zcopy": false, 00:35:06.942 "get_zone_info": false, 00:35:06.942 "zone_management": false, 00:35:06.942 "zone_append": false, 00:35:06.942 "compare": false, 00:35:06.942 "compare_and_write": false, 00:35:06.942 "abort": false, 00:35:06.942 "seek_hole": false, 00:35:06.942 "seek_data": false, 00:35:06.942 "copy": false, 00:35:06.942 "nvme_iov_md": false 00:35:06.942 }, 00:35:06.942 "memory_domains": [ 00:35:06.942 { 00:35:06.942 "dma_device_id": "system", 00:35:06.942 "dma_device_type": 1 00:35:06.942 }, 00:35:06.942 { 00:35:06.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:06.942 "dma_device_type": 2 00:35:06.942 }, 00:35:06.942 { 00:35:06.942 "dma_device_id": "system", 00:35:06.942 "dma_device_type": 1 00:35:06.942 }, 00:35:06.942 { 00:35:06.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:06.942 "dma_device_type": 2 00:35:06.942 } 00:35:06.942 ], 00:35:06.942 "driver_specific": { 00:35:06.942 "raid": { 00:35:06.942 "uuid": "e8c53861-5f60-4d19-9231-c3e724b03902", 00:35:06.942 "strip_size_kb": 64, 00:35:06.942 "state": "online", 00:35:06.942 "raid_level": "concat", 00:35:06.942 "superblock": false, 00:35:06.942 "num_base_bdevs": 2, 00:35:06.942 "num_base_bdevs_discovered": 2, 00:35:06.942 "num_base_bdevs_operational": 2, 00:35:06.942 "base_bdevs_list": [ 00:35:06.942 { 00:35:06.942 "name": "BaseBdev1", 00:35:06.942 "uuid": "0839ec6a-f488-4a59-90a5-69725eca2361", 00:35:06.942 "is_configured": true, 00:35:06.942 "data_offset": 0, 00:35:06.942 "data_size": 65536 00:35:06.942 }, 00:35:06.942 { 00:35:06.942 "name": "BaseBdev2", 
00:35:06.942 "uuid": "85a96c50-a36f-4ebf-b555-81662cf754a5", 00:35:06.942 "is_configured": true, 00:35:06.942 "data_offset": 0, 00:35:06.942 "data_size": 65536 00:35:06.942 } 00:35:06.942 ] 00:35:06.942 } 00:35:06.942 } 00:35:06.942 }' 00:35:06.942 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:06.942 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:35:06.942 BaseBdev2' 00:35:06.942 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:06.942 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:06.942 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:06.942 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:35:06.942 05:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.942 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:06.942 05:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:06.942 05:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.942 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:06.942 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:06.942 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:06.942 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:35:06.942 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:35:06.942 05:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.942 05:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:06.942 05:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.200 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:07.200 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:07.200 05:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:35:07.200 05:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.200 05:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:07.200 [2024-12-09 05:25:53.938643] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:07.200 [2024-12-09 05:25:53.938691] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:07.200 [2024-12-09 05:25:53.938777] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:07.200 05:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.200 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:35:07.200 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:35:07.200 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:35:07.200 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:35:07.201 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:35:07.201 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:35:07.201 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:07.201 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:35:07.201 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:07.201 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:07.201 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:35:07.201 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:07.201 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:07.201 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:07.201 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:07.201 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:07.201 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:07.201 05:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.201 05:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:07.201 05:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.201 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:07.201 "name": "Existed_Raid", 00:35:07.201 "uuid": "e8c53861-5f60-4d19-9231-c3e724b03902", 00:35:07.201 "strip_size_kb": 64, 00:35:07.201 
"state": "offline", 00:35:07.201 "raid_level": "concat", 00:35:07.201 "superblock": false, 00:35:07.201 "num_base_bdevs": 2, 00:35:07.201 "num_base_bdevs_discovered": 1, 00:35:07.201 "num_base_bdevs_operational": 1, 00:35:07.201 "base_bdevs_list": [ 00:35:07.201 { 00:35:07.201 "name": null, 00:35:07.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:07.201 "is_configured": false, 00:35:07.201 "data_offset": 0, 00:35:07.201 "data_size": 65536 00:35:07.201 }, 00:35:07.201 { 00:35:07.201 "name": "BaseBdev2", 00:35:07.201 "uuid": "85a96c50-a36f-4ebf-b555-81662cf754a5", 00:35:07.201 "is_configured": true, 00:35:07.201 "data_offset": 0, 00:35:07.201 "data_size": 65536 00:35:07.201 } 00:35:07.201 ] 00:35:07.201 }' 00:35:07.201 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:07.201 05:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:07.767 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:35:07.767 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:07.767 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:07.767 05:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.767 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:35:07.767 05:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:07.767 05:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.767 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:35:07.767 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:07.767 05:25:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:35:07.767 05:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.767 05:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:07.767 [2024-12-09 05:25:54.622646] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:07.767 [2024-12-09 05:25:54.622731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:35:07.767 05:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.767 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:35:07.767 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:07.767 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:07.767 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:35:07.767 05:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.767 05:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:07.767 05:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.026 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:35:08.026 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:35:08.026 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:35:08.026 05:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61695 00:35:08.026 05:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61695 ']' 00:35:08.026 05:25:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61695 00:35:08.026 05:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:35:08.026 05:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:08.026 05:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61695 00:35:08.026 killing process with pid 61695 00:35:08.026 05:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:08.026 05:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:08.026 05:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61695' 00:35:08.026 05:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61695 00:35:08.026 [2024-12-09 05:25:54.790937] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:08.026 05:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61695 00:35:08.026 [2024-12-09 05:25:54.805627] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:08.959 05:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:35:08.959 00:35:08.959 real 0m5.584s 00:35:08.959 user 0m8.354s 00:35:08.959 sys 0m0.828s 00:35:08.959 05:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:08.959 ************************************ 00:35:08.959 END TEST raid_state_function_test 00:35:08.959 05:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:08.959 ************************************ 00:35:09.217 05:25:55 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:35:09.217 05:25:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:35:09.217 05:25:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:09.217 05:25:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:09.217 ************************************ 00:35:09.217 START TEST raid_state_function_test_sb 00:35:09.217 ************************************ 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61953 00:35:09.217 Process raid pid: 61953 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61953' 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61953 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61953 ']' 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:09.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:09.217 05:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:09.217 [2024-12-09 05:25:56.063706] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:35:09.217 [2024-12-09 05:25:56.063880] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:09.475 [2024-12-09 05:25:56.250363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:09.475 [2024-12-09 05:25:56.419904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:09.732 [2024-12-09 05:25:56.633828] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:09.732 [2024-12-09 05:25:56.633873] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:10.297 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:10.297 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:35:10.297 05:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:35:10.297 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.297 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.297 [2024-12-09 05:25:57.066664] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:35:10.297 [2024-12-09 05:25:57.066746] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:10.297 [2024-12-09 05:25:57.066763] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:10.297 [2024-12-09 05:25:57.066807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:10.297 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.297 05:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:35:10.297 05:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:10.297 05:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:10.297 05:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:10.297 05:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:10.297 05:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:10.297 05:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:10.297 05:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:10.297 05:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:10.297 05:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:10.297 05:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:10.297 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.297 
05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.297 05:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:10.297 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.297 05:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:10.297 "name": "Existed_Raid", 00:35:10.297 "uuid": "27c57c6c-d268-45d5-b04e-59b12a8aa75c", 00:35:10.297 "strip_size_kb": 64, 00:35:10.297 "state": "configuring", 00:35:10.297 "raid_level": "concat", 00:35:10.297 "superblock": true, 00:35:10.297 "num_base_bdevs": 2, 00:35:10.297 "num_base_bdevs_discovered": 0, 00:35:10.297 "num_base_bdevs_operational": 2, 00:35:10.297 "base_bdevs_list": [ 00:35:10.297 { 00:35:10.297 "name": "BaseBdev1", 00:35:10.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:10.297 "is_configured": false, 00:35:10.297 "data_offset": 0, 00:35:10.297 "data_size": 0 00:35:10.297 }, 00:35:10.297 { 00:35:10.297 "name": "BaseBdev2", 00:35:10.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:10.297 "is_configured": false, 00:35:10.297 "data_offset": 0, 00:35:10.297 "data_size": 0 00:35:10.297 } 00:35:10.297 ] 00:35:10.297 }' 00:35:10.297 05:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:10.297 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.863 05:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:10.863 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.863 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.863 [2024-12-09 05:25:57.606828] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:35:10.863 [2024-12-09 05:25:57.606952] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:35:10.863 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.863 05:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.864 [2024-12-09 05:25:57.614791] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:10.864 [2024-12-09 05:25:57.614855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:10.864 [2024-12-09 05:25:57.614872] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:10.864 [2024-12-09 05:25:57.614892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.864 [2024-12-09 05:25:57.661723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:10.864 BaseBdev1 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.864 [ 00:35:10.864 { 00:35:10.864 "name": "BaseBdev1", 00:35:10.864 "aliases": [ 00:35:10.864 "6b17a3c7-8d8f-4529-aad4-12061b9f6602" 00:35:10.864 ], 00:35:10.864 "product_name": "Malloc disk", 00:35:10.864 "block_size": 512, 00:35:10.864 "num_blocks": 65536, 00:35:10.864 "uuid": "6b17a3c7-8d8f-4529-aad4-12061b9f6602", 00:35:10.864 "assigned_rate_limits": { 00:35:10.864 "rw_ios_per_sec": 0, 00:35:10.864 "rw_mbytes_per_sec": 0, 00:35:10.864 "r_mbytes_per_sec": 0, 00:35:10.864 "w_mbytes_per_sec": 0 00:35:10.864 }, 00:35:10.864 "claimed": true, 
00:35:10.864 "claim_type": "exclusive_write", 00:35:10.864 "zoned": false, 00:35:10.864 "supported_io_types": { 00:35:10.864 "read": true, 00:35:10.864 "write": true, 00:35:10.864 "unmap": true, 00:35:10.864 "flush": true, 00:35:10.864 "reset": true, 00:35:10.864 "nvme_admin": false, 00:35:10.864 "nvme_io": false, 00:35:10.864 "nvme_io_md": false, 00:35:10.864 "write_zeroes": true, 00:35:10.864 "zcopy": true, 00:35:10.864 "get_zone_info": false, 00:35:10.864 "zone_management": false, 00:35:10.864 "zone_append": false, 00:35:10.864 "compare": false, 00:35:10.864 "compare_and_write": false, 00:35:10.864 "abort": true, 00:35:10.864 "seek_hole": false, 00:35:10.864 "seek_data": false, 00:35:10.864 "copy": true, 00:35:10.864 "nvme_iov_md": false 00:35:10.864 }, 00:35:10.864 "memory_domains": [ 00:35:10.864 { 00:35:10.864 "dma_device_id": "system", 00:35:10.864 "dma_device_type": 1 00:35:10.864 }, 00:35:10.864 { 00:35:10.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:10.864 "dma_device_type": 2 00:35:10.864 } 00:35:10.864 ], 00:35:10.864 "driver_specific": {} 00:35:10.864 } 00:35:10.864 ] 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:10.864 05:25:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:10.864 "name": "Existed_Raid", 00:35:10.864 "uuid": "c625a063-0d2d-4e8c-a435-ced6efc7b94d", 00:35:10.864 "strip_size_kb": 64, 00:35:10.864 "state": "configuring", 00:35:10.864 "raid_level": "concat", 00:35:10.864 "superblock": true, 00:35:10.864 "num_base_bdevs": 2, 00:35:10.864 "num_base_bdevs_discovered": 1, 00:35:10.864 "num_base_bdevs_operational": 2, 00:35:10.864 "base_bdevs_list": [ 00:35:10.864 { 00:35:10.864 "name": "BaseBdev1", 00:35:10.864 "uuid": "6b17a3c7-8d8f-4529-aad4-12061b9f6602", 00:35:10.864 "is_configured": true, 00:35:10.864 "data_offset": 2048, 00:35:10.864 "data_size": 63488 00:35:10.864 }, 00:35:10.864 { 00:35:10.864 "name": "BaseBdev2", 00:35:10.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:10.864 
"is_configured": false, 00:35:10.864 "data_offset": 0, 00:35:10.864 "data_size": 0 00:35:10.864 } 00:35:10.864 ] 00:35:10.864 }' 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:10.864 05:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.506 05:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:11.506 05:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.506 05:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.506 [2024-12-09 05:25:58.217962] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:11.506 [2024-12-09 05:25:58.218046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:35:11.506 05:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.506 05:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:35:11.506 05:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.506 05:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.506 [2024-12-09 05:25:58.225990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:11.506 [2024-12-09 05:25:58.228907] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:11.506 [2024-12-09 05:25:58.229090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:11.506 05:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.506 05:25:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:35:11.506 05:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:11.506 05:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:35:11.506 05:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:11.506 05:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:11.506 05:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:11.506 05:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:11.506 05:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:11.506 05:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:11.506 05:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:11.506 05:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:11.506 05:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:11.506 05:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:11.507 05:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.507 05:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.507 05:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:11.507 05:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.507 05:25:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:11.507 "name": "Existed_Raid", 00:35:11.507 "uuid": "4d3b9e3c-dd7b-4f88-bdef-7ef8de7d4c1c", 00:35:11.507 "strip_size_kb": 64, 00:35:11.507 "state": "configuring", 00:35:11.507 "raid_level": "concat", 00:35:11.507 "superblock": true, 00:35:11.507 "num_base_bdevs": 2, 00:35:11.507 "num_base_bdevs_discovered": 1, 00:35:11.507 "num_base_bdevs_operational": 2, 00:35:11.507 "base_bdevs_list": [ 00:35:11.507 { 00:35:11.507 "name": "BaseBdev1", 00:35:11.507 "uuid": "6b17a3c7-8d8f-4529-aad4-12061b9f6602", 00:35:11.507 "is_configured": true, 00:35:11.507 "data_offset": 2048, 00:35:11.507 "data_size": 63488 00:35:11.507 }, 00:35:11.507 { 00:35:11.507 "name": "BaseBdev2", 00:35:11.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:11.507 "is_configured": false, 00:35:11.507 "data_offset": 0, 00:35:11.507 "data_size": 0 00:35:11.507 } 00:35:11.507 ] 00:35:11.507 }' 00:35:11.507 05:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:11.507 05:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:12.073 05:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:35:12.073 05:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.073 05:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:12.073 [2024-12-09 05:25:58.794857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:12.073 [2024-12-09 05:25:58.795423] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:35:12.073 [2024-12-09 05:25:58.795450] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:35:12.073 BaseBdev2 00:35:12.073 [2024-12-09 05:25:58.795815] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:35:12.073 [2024-12-09 05:25:58.796039] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:35:12.073 [2024-12-09 05:25:58.796072] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:35:12.073 [2024-12-09 05:25:58.796253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:12.073 05:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.073 05:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:35:12.073 05:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:35:12.073 05:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:12.073 05:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:12.073 05:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:12.073 05:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:12.073 05:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:12.073 05:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.073 05:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:12.073 05:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.073 05:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:12.073 05:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.073 
05:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:12.073 [ 00:35:12.073 { 00:35:12.073 "name": "BaseBdev2", 00:35:12.073 "aliases": [ 00:35:12.073 "9bdf983b-dcb1-44a0-8faf-6eccf3314337" 00:35:12.073 ], 00:35:12.073 "product_name": "Malloc disk", 00:35:12.073 "block_size": 512, 00:35:12.073 "num_blocks": 65536, 00:35:12.073 "uuid": "9bdf983b-dcb1-44a0-8faf-6eccf3314337", 00:35:12.073 "assigned_rate_limits": { 00:35:12.073 "rw_ios_per_sec": 0, 00:35:12.073 "rw_mbytes_per_sec": 0, 00:35:12.073 "r_mbytes_per_sec": 0, 00:35:12.073 "w_mbytes_per_sec": 0 00:35:12.073 }, 00:35:12.073 "claimed": true, 00:35:12.073 "claim_type": "exclusive_write", 00:35:12.073 "zoned": false, 00:35:12.073 "supported_io_types": { 00:35:12.073 "read": true, 00:35:12.073 "write": true, 00:35:12.073 "unmap": true, 00:35:12.073 "flush": true, 00:35:12.073 "reset": true, 00:35:12.073 "nvme_admin": false, 00:35:12.073 "nvme_io": false, 00:35:12.073 "nvme_io_md": false, 00:35:12.073 "write_zeroes": true, 00:35:12.073 "zcopy": true, 00:35:12.073 "get_zone_info": false, 00:35:12.073 "zone_management": false, 00:35:12.073 "zone_append": false, 00:35:12.073 "compare": false, 00:35:12.073 "compare_and_write": false, 00:35:12.073 "abort": true, 00:35:12.073 "seek_hole": false, 00:35:12.073 "seek_data": false, 00:35:12.073 "copy": true, 00:35:12.073 "nvme_iov_md": false 00:35:12.073 }, 00:35:12.073 "memory_domains": [ 00:35:12.073 { 00:35:12.073 "dma_device_id": "system", 00:35:12.073 "dma_device_type": 1 00:35:12.073 }, 00:35:12.073 { 00:35:12.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:12.073 "dma_device_type": 2 00:35:12.073 } 00:35:12.073 ], 00:35:12.073 "driver_specific": {} 00:35:12.073 } 00:35:12.073 ] 00:35:12.073 05:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.073 05:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:12.073 05:25:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:35:12.073 05:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:12.073 05:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:35:12.073 05:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:12.073 05:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:12.073 05:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:12.073 05:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:12.073 05:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:12.073 05:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:12.073 05:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:12.073 05:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:12.073 05:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:12.073 05:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:12.073 05:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:12.073 05:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.074 05:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:12.074 05:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.074 05:25:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:12.074 "name": "Existed_Raid", 00:35:12.074 "uuid": "4d3b9e3c-dd7b-4f88-bdef-7ef8de7d4c1c", 00:35:12.074 "strip_size_kb": 64, 00:35:12.074 "state": "online", 00:35:12.074 "raid_level": "concat", 00:35:12.074 "superblock": true, 00:35:12.074 "num_base_bdevs": 2, 00:35:12.074 "num_base_bdevs_discovered": 2, 00:35:12.074 "num_base_bdevs_operational": 2, 00:35:12.074 "base_bdevs_list": [ 00:35:12.074 { 00:35:12.074 "name": "BaseBdev1", 00:35:12.074 "uuid": "6b17a3c7-8d8f-4529-aad4-12061b9f6602", 00:35:12.074 "is_configured": true, 00:35:12.074 "data_offset": 2048, 00:35:12.074 "data_size": 63488 00:35:12.074 }, 00:35:12.074 { 00:35:12.074 "name": "BaseBdev2", 00:35:12.074 "uuid": "9bdf983b-dcb1-44a0-8faf-6eccf3314337", 00:35:12.074 "is_configured": true, 00:35:12.074 "data_offset": 2048, 00:35:12.074 "data_size": 63488 00:35:12.074 } 00:35:12.074 ] 00:35:12.074 }' 00:35:12.074 05:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:12.074 05:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:12.640 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:35:12.640 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:35:12.640 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:12.640 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:12.640 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:35:12.640 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:12.640 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:35:12.640 05:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.640 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:12.640 05:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:12.640 [2024-12-09 05:25:59.367465] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:12.640 05:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.640 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:12.640 "name": "Existed_Raid", 00:35:12.640 "aliases": [ 00:35:12.640 "4d3b9e3c-dd7b-4f88-bdef-7ef8de7d4c1c" 00:35:12.640 ], 00:35:12.640 "product_name": "Raid Volume", 00:35:12.640 "block_size": 512, 00:35:12.640 "num_blocks": 126976, 00:35:12.640 "uuid": "4d3b9e3c-dd7b-4f88-bdef-7ef8de7d4c1c", 00:35:12.640 "assigned_rate_limits": { 00:35:12.640 "rw_ios_per_sec": 0, 00:35:12.640 "rw_mbytes_per_sec": 0, 00:35:12.641 "r_mbytes_per_sec": 0, 00:35:12.641 "w_mbytes_per_sec": 0 00:35:12.641 }, 00:35:12.641 "claimed": false, 00:35:12.641 "zoned": false, 00:35:12.641 "supported_io_types": { 00:35:12.641 "read": true, 00:35:12.641 "write": true, 00:35:12.641 "unmap": true, 00:35:12.641 "flush": true, 00:35:12.641 "reset": true, 00:35:12.641 "nvme_admin": false, 00:35:12.641 "nvme_io": false, 00:35:12.641 "nvme_io_md": false, 00:35:12.641 "write_zeroes": true, 00:35:12.641 "zcopy": false, 00:35:12.641 "get_zone_info": false, 00:35:12.641 "zone_management": false, 00:35:12.641 "zone_append": false, 00:35:12.641 "compare": false, 00:35:12.641 "compare_and_write": false, 00:35:12.641 "abort": false, 00:35:12.641 "seek_hole": false, 00:35:12.641 "seek_data": false, 00:35:12.641 "copy": false, 00:35:12.641 "nvme_iov_md": false 00:35:12.641 }, 00:35:12.641 "memory_domains": [ 00:35:12.641 { 00:35:12.641 
"dma_device_id": "system", 00:35:12.641 "dma_device_type": 1 00:35:12.641 }, 00:35:12.641 { 00:35:12.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:12.641 "dma_device_type": 2 00:35:12.641 }, 00:35:12.641 { 00:35:12.641 "dma_device_id": "system", 00:35:12.641 "dma_device_type": 1 00:35:12.641 }, 00:35:12.641 { 00:35:12.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:12.641 "dma_device_type": 2 00:35:12.641 } 00:35:12.641 ], 00:35:12.641 "driver_specific": { 00:35:12.641 "raid": { 00:35:12.641 "uuid": "4d3b9e3c-dd7b-4f88-bdef-7ef8de7d4c1c", 00:35:12.641 "strip_size_kb": 64, 00:35:12.641 "state": "online", 00:35:12.641 "raid_level": "concat", 00:35:12.641 "superblock": true, 00:35:12.641 "num_base_bdevs": 2, 00:35:12.641 "num_base_bdevs_discovered": 2, 00:35:12.641 "num_base_bdevs_operational": 2, 00:35:12.641 "base_bdevs_list": [ 00:35:12.641 { 00:35:12.641 "name": "BaseBdev1", 00:35:12.641 "uuid": "6b17a3c7-8d8f-4529-aad4-12061b9f6602", 00:35:12.641 "is_configured": true, 00:35:12.641 "data_offset": 2048, 00:35:12.641 "data_size": 63488 00:35:12.641 }, 00:35:12.641 { 00:35:12.641 "name": "BaseBdev2", 00:35:12.641 "uuid": "9bdf983b-dcb1-44a0-8faf-6eccf3314337", 00:35:12.641 "is_configured": true, 00:35:12.641 "data_offset": 2048, 00:35:12.641 "data_size": 63488 00:35:12.641 } 00:35:12.641 ] 00:35:12.641 } 00:35:12.641 } 00:35:12.641 }' 00:35:12.641 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:12.641 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:35:12.641 BaseBdev2' 00:35:12.641 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:12.641 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:12.641 05:25:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:12.641 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:35:12.641 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:12.641 05:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.641 05:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:12.641 05:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.641 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:12.641 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:12.641 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:12.641 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:12.641 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:35:12.641 05:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.641 05:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:12.641 05:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.900 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:12.900 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:12.900 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:35:12.900 05:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.900 05:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:12.900 [2024-12-09 05:25:59.631717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:12.900 [2024-12-09 05:25:59.631765] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:12.900 [2024-12-09 05:25:59.631869] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:12.900 05:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.900 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:35:12.900 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:35:12.900 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:35:12.900 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:35:12.900 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:35:12.900 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:35:12.900 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:12.900 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:35:12.900 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:12.900 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:12.900 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:35:12.900 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:12.900 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:12.900 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:12.900 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:12.900 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:12.900 05:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.900 05:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:12.900 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:12.900 05:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.900 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:12.900 "name": "Existed_Raid", 00:35:12.900 "uuid": "4d3b9e3c-dd7b-4f88-bdef-7ef8de7d4c1c", 00:35:12.900 "strip_size_kb": 64, 00:35:12.900 "state": "offline", 00:35:12.900 "raid_level": "concat", 00:35:12.900 "superblock": true, 00:35:12.900 "num_base_bdevs": 2, 00:35:12.900 "num_base_bdevs_discovered": 1, 00:35:12.900 "num_base_bdevs_operational": 1, 00:35:12.900 "base_bdevs_list": [ 00:35:12.900 { 00:35:12.900 "name": null, 00:35:12.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:12.900 "is_configured": false, 00:35:12.900 "data_offset": 0, 00:35:12.900 "data_size": 63488 00:35:12.900 }, 00:35:12.900 { 00:35:12.900 "name": "BaseBdev2", 00:35:12.900 "uuid": "9bdf983b-dcb1-44a0-8faf-6eccf3314337", 00:35:12.900 "is_configured": true, 00:35:12.900 "data_offset": 2048, 00:35:12.900 "data_size": 63488 00:35:12.900 } 00:35:12.900 ] 
00:35:12.900 }' 00:35:12.900 05:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:12.900 05:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:13.466 05:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:35:13.466 05:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:13.466 05:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:35:13.466 05:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:13.466 05:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.466 05:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:13.466 05:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.466 05:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:35:13.466 05:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:13.466 05:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:35:13.466 05:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.466 05:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:13.466 [2024-12-09 05:26:00.298282] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:13.466 [2024-12-09 05:26:00.298370] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:35:13.466 05:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.466 05:26:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:35:13.466 05:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:13.466 05:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:13.466 05:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.466 05:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:35:13.466 05:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:13.466 05:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.725 05:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:35:13.725 05:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:35:13.725 05:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:35:13.725 05:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61953 00:35:13.725 05:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61953 ']' 00:35:13.725 05:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61953 00:35:13.725 05:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:35:13.725 05:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:13.725 05:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61953 00:35:13.725 killing process with pid 61953 00:35:13.725 05:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:13.725 05:26:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:13.725 05:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61953' 00:35:13.725 05:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61953 00:35:13.725 05:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61953 00:35:13.725 [2024-12-09 05:26:00.485550] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:13.725 [2024-12-09 05:26:00.502308] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:15.097 ************************************ 00:35:15.097 END TEST raid_state_function_test_sb 00:35:15.097 ************************************ 00:35:15.097 05:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:35:15.097 00:35:15.097 real 0m5.699s 00:35:15.097 user 0m8.534s 00:35:15.097 sys 0m0.828s 00:35:15.097 05:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:15.097 05:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:15.097 05:26:01 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:35:15.097 05:26:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:15.097 05:26:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:15.097 05:26:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:15.097 ************************************ 00:35:15.097 START TEST raid_superblock_test 00:35:15.097 ************************************ 00:35:15.097 05:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:35:15.097 05:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:35:15.097 05:26:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:35:15.097 05:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:35:15.097 05:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:35:15.097 05:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:35:15.097 05:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:35:15.097 05:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:35:15.097 05:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:35:15.097 05:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:35:15.097 05:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:35:15.097 05:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:35:15.097 05:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:35:15.097 05:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:35:15.097 05:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:35:15.097 05:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:35:15.097 05:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:35:15.097 05:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62211 00:35:15.097 05:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:35:15.097 05:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62211 00:35:15.097 05:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62211 ']' 00:35:15.097 
05:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:15.097 05:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:15.097 05:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:15.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:15.097 05:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:15.097 05:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:15.097 [2024-12-09 05:26:01.820341] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:35:15.097 [2024-12-09 05:26:01.820804] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62211 ] 00:35:15.097 [2024-12-09 05:26:01.997853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:15.355 [2024-12-09 05:26:02.133207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:15.613 [2024-12-09 05:26:02.348684] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:15.613 [2024-12-09 05:26:02.348765] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:15.871 05:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:15.871 05:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:35:15.871 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:35:15.871 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:35:15.871 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:35:15.871 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:35:15.871 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:35:15.871 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:15.871 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:35:15.871 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:15.871 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:35:15.871 05:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.871 05:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:16.129 malloc1 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:16.129 [2024-12-09 05:26:02.858431] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:16.129 [2024-12-09 05:26:02.858521] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:16.129 [2024-12-09 05:26:02.858552] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:35:16.129 [2024-12-09 05:26:02.858567] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:35:16.129 [2024-12-09 05:26:02.861556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:16.129 [2024-12-09 05:26:02.861598] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:16.129 pt1 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:16.129 malloc2 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:16.129 [2024-12-09 05:26:02.911286] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:16.129 [2024-12-09 05:26:02.911509] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:16.129 [2024-12-09 05:26:02.911590] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:35:16.129 [2024-12-09 05:26:02.911897] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:16.129 [2024-12-09 05:26:02.914536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:16.129 [2024-12-09 05:26:02.914723] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:16.129 pt2 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:16.129 [2024-12-09 05:26:02.923521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:16.129 [2024-12-09 05:26:02.925751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:16.129 [2024-12-09 05:26:02.925986] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:35:16.129 [2024-12-09 05:26:02.926005] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:35:16.129 [2024-12-09 05:26:02.926322] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:35:16.129 [2024-12-09 05:26:02.926495] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:35:16.129 [2024-12-09 05:26:02.926513] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:35:16.129 [2024-12-09 05:26:02.926671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:16.129 05:26:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:16.129 "name": "raid_bdev1", 00:35:16.129 "uuid": "30bbee84-6264-4e70-86a9-dd25b8276930", 00:35:16.129 "strip_size_kb": 64, 00:35:16.129 "state": "online", 00:35:16.129 "raid_level": "concat", 00:35:16.129 "superblock": true, 00:35:16.129 "num_base_bdevs": 2, 00:35:16.129 "num_base_bdevs_discovered": 2, 00:35:16.129 "num_base_bdevs_operational": 2, 00:35:16.129 "base_bdevs_list": [ 00:35:16.129 { 00:35:16.129 "name": "pt1", 00:35:16.129 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:16.129 "is_configured": true, 00:35:16.129 "data_offset": 2048, 00:35:16.129 "data_size": 63488 00:35:16.129 }, 00:35:16.129 { 00:35:16.129 "name": "pt2", 00:35:16.129 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:16.129 "is_configured": true, 00:35:16.129 "data_offset": 2048, 00:35:16.129 "data_size": 63488 00:35:16.129 } 00:35:16.129 ] 00:35:16.129 }' 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:16.129 05:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:16.694 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:35:16.694 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:35:16.694 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:16.694 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:16.694 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:35:16.694 
05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:16.694 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:16.694 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:16.694 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.694 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:16.694 [2024-12-09 05:26:03.448204] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:16.694 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.694 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:16.694 "name": "raid_bdev1", 00:35:16.694 "aliases": [ 00:35:16.694 "30bbee84-6264-4e70-86a9-dd25b8276930" 00:35:16.694 ], 00:35:16.694 "product_name": "Raid Volume", 00:35:16.694 "block_size": 512, 00:35:16.694 "num_blocks": 126976, 00:35:16.694 "uuid": "30bbee84-6264-4e70-86a9-dd25b8276930", 00:35:16.694 "assigned_rate_limits": { 00:35:16.694 "rw_ios_per_sec": 0, 00:35:16.694 "rw_mbytes_per_sec": 0, 00:35:16.694 "r_mbytes_per_sec": 0, 00:35:16.694 "w_mbytes_per_sec": 0 00:35:16.694 }, 00:35:16.694 "claimed": false, 00:35:16.694 "zoned": false, 00:35:16.694 "supported_io_types": { 00:35:16.694 "read": true, 00:35:16.694 "write": true, 00:35:16.694 "unmap": true, 00:35:16.694 "flush": true, 00:35:16.694 "reset": true, 00:35:16.694 "nvme_admin": false, 00:35:16.694 "nvme_io": false, 00:35:16.694 "nvme_io_md": false, 00:35:16.694 "write_zeroes": true, 00:35:16.694 "zcopy": false, 00:35:16.694 "get_zone_info": false, 00:35:16.694 "zone_management": false, 00:35:16.694 "zone_append": false, 00:35:16.694 "compare": false, 00:35:16.694 "compare_and_write": false, 00:35:16.694 "abort": false, 00:35:16.694 "seek_hole": false, 00:35:16.694 
"seek_data": false, 00:35:16.694 "copy": false, 00:35:16.694 "nvme_iov_md": false 00:35:16.694 }, 00:35:16.694 "memory_domains": [ 00:35:16.694 { 00:35:16.694 "dma_device_id": "system", 00:35:16.694 "dma_device_type": 1 00:35:16.694 }, 00:35:16.694 { 00:35:16.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:16.694 "dma_device_type": 2 00:35:16.694 }, 00:35:16.694 { 00:35:16.694 "dma_device_id": "system", 00:35:16.694 "dma_device_type": 1 00:35:16.694 }, 00:35:16.694 { 00:35:16.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:16.694 "dma_device_type": 2 00:35:16.694 } 00:35:16.694 ], 00:35:16.694 "driver_specific": { 00:35:16.694 "raid": { 00:35:16.694 "uuid": "30bbee84-6264-4e70-86a9-dd25b8276930", 00:35:16.694 "strip_size_kb": 64, 00:35:16.694 "state": "online", 00:35:16.694 "raid_level": "concat", 00:35:16.694 "superblock": true, 00:35:16.694 "num_base_bdevs": 2, 00:35:16.694 "num_base_bdevs_discovered": 2, 00:35:16.694 "num_base_bdevs_operational": 2, 00:35:16.694 "base_bdevs_list": [ 00:35:16.694 { 00:35:16.694 "name": "pt1", 00:35:16.694 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:16.694 "is_configured": true, 00:35:16.694 "data_offset": 2048, 00:35:16.694 "data_size": 63488 00:35:16.694 }, 00:35:16.694 { 00:35:16.694 "name": "pt2", 00:35:16.694 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:16.694 "is_configured": true, 00:35:16.694 "data_offset": 2048, 00:35:16.694 "data_size": 63488 00:35:16.694 } 00:35:16.694 ] 00:35:16.694 } 00:35:16.694 } 00:35:16.694 }' 00:35:16.694 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:16.694 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:35:16.694 pt2' 00:35:16.694 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:16.694 05:26:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:16.694 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:16.694 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:16.694 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:35:16.694 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.694 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:16.694 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.694 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:16.694 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:16.694 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:16.694 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:35:16.694 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:16.695 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.695 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:16.953 [2024-12-09 05:26:03.716130] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=30bbee84-6264-4e70-86a9-dd25b8276930 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 30bbee84-6264-4e70-86a9-dd25b8276930 ']' 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:16.953 [2024-12-09 05:26:03.767774] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:16.953 [2024-12-09 05:26:03.767822] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:16.953 [2024-12-09 05:26:03.767916] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:16.953 [2024-12-09 05:26:03.767991] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:16.953 [2024-12-09 05:26:03.768009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:16.953 [2024-12-09 05:26:03.915877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:35:16.953 [2024-12-09 05:26:03.919552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:35:16.953 [2024-12-09 05:26:03.919701] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:35:16.953 [2024-12-09 05:26:03.919841] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:35:16.953 [2024-12-09 05:26:03.919878] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:16.953 [2024-12-09 05:26:03.919901] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:35:16.953 request: 00:35:16.953 { 00:35:16.953 "name": "raid_bdev1", 00:35:16.953 "raid_level": "concat", 00:35:16.953 "base_bdevs": [ 00:35:16.953 "malloc1", 00:35:16.953 "malloc2" 00:35:16.953 ], 00:35:16.953 "strip_size_kb": 64, 00:35:16.953 "superblock": false, 00:35:16.953 "method": "bdev_raid_create", 00:35:16.953 "req_id": 1 00:35:16.953 } 00:35:16.953 Got JSON-RPC error response 00:35:16.953 response: 00:35:16.953 { 00:35:16.953 "code": -17, 00:35:16.953 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:35:16.953 } 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:16.953 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:17.211 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:17.211 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:35:17.211 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.211 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:17.211 
05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.211 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:35:17.211 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:35:17.211 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:17.211 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.211 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:17.211 [2024-12-09 05:26:03.984212] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:17.211 [2024-12-09 05:26:03.984465] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:17.211 [2024-12-09 05:26:03.984549] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:35:17.211 [2024-12-09 05:26:03.984749] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:17.211 [2024-12-09 05:26:03.988583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:17.211 [2024-12-09 05:26:03.988805] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:17.211 [2024-12-09 05:26:03.988949] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:35:17.211 [2024-12-09 05:26:03.989039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:17.211 pt1 00:35:17.211 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.211 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:35:17.211 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:35:17.211 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:17.211 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:17.211 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:17.211 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:17.211 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:17.211 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:17.211 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:17.211 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:17.211 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:17.211 05:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:17.211 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.211 05:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:17.211 05:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.211 05:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:17.211 "name": "raid_bdev1", 00:35:17.211 "uuid": "30bbee84-6264-4e70-86a9-dd25b8276930", 00:35:17.211 "strip_size_kb": 64, 00:35:17.211 "state": "configuring", 00:35:17.211 "raid_level": "concat", 00:35:17.211 "superblock": true, 00:35:17.211 "num_base_bdevs": 2, 00:35:17.211 "num_base_bdevs_discovered": 1, 00:35:17.211 "num_base_bdevs_operational": 2, 00:35:17.211 "base_bdevs_list": [ 00:35:17.211 { 00:35:17.211 "name": "pt1", 00:35:17.211 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:35:17.211 "is_configured": true, 00:35:17.211 "data_offset": 2048, 00:35:17.211 "data_size": 63488 00:35:17.211 }, 00:35:17.211 { 00:35:17.211 "name": null, 00:35:17.211 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:17.211 "is_configured": false, 00:35:17.211 "data_offset": 2048, 00:35:17.211 "data_size": 63488 00:35:17.211 } 00:35:17.211 ] 00:35:17.211 }' 00:35:17.211 05:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:17.211 05:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:17.777 05:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:35:17.777 05:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:35:17.777 05:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:35:17.777 05:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:17.777 05:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.777 05:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:17.777 [2024-12-09 05:26:04.501211] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:17.777 [2024-12-09 05:26:04.501315] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:17.777 [2024-12-09 05:26:04.501347] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:35:17.777 [2024-12-09 05:26:04.501364] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:17.777 [2024-12-09 05:26:04.502023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:17.777 [2024-12-09 05:26:04.502066] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:35:17.777 [2024-12-09 05:26:04.502173] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:35:17.777 [2024-12-09 05:26:04.502234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:17.777 [2024-12-09 05:26:04.502393] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:35:17.777 [2024-12-09 05:26:04.502421] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:35:17.777 [2024-12-09 05:26:04.502736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:35:17.777 [2024-12-09 05:26:04.502967] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:35:17.777 [2024-12-09 05:26:04.502984] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:35:17.777 [2024-12-09 05:26:04.503185] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:17.777 pt2 00:35:17.777 05:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.777 05:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:35:17.777 05:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:35:17.777 05:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:35:17.777 05:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:17.777 05:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:17.777 05:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:17.777 05:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:17.777 05:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:35:17.777 05:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:17.777 05:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:17.777 05:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:17.777 05:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:17.777 05:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:17.777 05:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:17.777 05:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.777 05:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:17.777 05:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.777 05:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:17.777 "name": "raid_bdev1", 00:35:17.777 "uuid": "30bbee84-6264-4e70-86a9-dd25b8276930", 00:35:17.777 "strip_size_kb": 64, 00:35:17.777 "state": "online", 00:35:17.777 "raid_level": "concat", 00:35:17.777 "superblock": true, 00:35:17.777 "num_base_bdevs": 2, 00:35:17.777 "num_base_bdevs_discovered": 2, 00:35:17.777 "num_base_bdevs_operational": 2, 00:35:17.777 "base_bdevs_list": [ 00:35:17.777 { 00:35:17.777 "name": "pt1", 00:35:17.777 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:17.777 "is_configured": true, 00:35:17.777 "data_offset": 2048, 00:35:17.777 "data_size": 63488 00:35:17.777 }, 00:35:17.777 { 00:35:17.777 "name": "pt2", 00:35:17.777 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:17.777 "is_configured": true, 00:35:17.777 "data_offset": 2048, 00:35:17.777 "data_size": 63488 00:35:17.777 } 00:35:17.777 ] 00:35:17.777 }' 00:35:17.777 05:26:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:17.777 05:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:18.036 05:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:35:18.036 05:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:35:18.036 05:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:18.036 05:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:18.036 05:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:35:18.036 05:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:18.036 05:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:18.036 05:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:18.036 05:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.036 05:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:18.305 [2024-12-09 05:26:05.009685] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:18.305 05:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.305 05:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:18.305 "name": "raid_bdev1", 00:35:18.305 "aliases": [ 00:35:18.305 "30bbee84-6264-4e70-86a9-dd25b8276930" 00:35:18.305 ], 00:35:18.305 "product_name": "Raid Volume", 00:35:18.305 "block_size": 512, 00:35:18.305 "num_blocks": 126976, 00:35:18.305 "uuid": "30bbee84-6264-4e70-86a9-dd25b8276930", 00:35:18.305 "assigned_rate_limits": { 00:35:18.305 "rw_ios_per_sec": 0, 00:35:18.305 "rw_mbytes_per_sec": 0, 00:35:18.305 
"r_mbytes_per_sec": 0, 00:35:18.305 "w_mbytes_per_sec": 0 00:35:18.305 }, 00:35:18.305 "claimed": false, 00:35:18.305 "zoned": false, 00:35:18.305 "supported_io_types": { 00:35:18.305 "read": true, 00:35:18.305 "write": true, 00:35:18.305 "unmap": true, 00:35:18.305 "flush": true, 00:35:18.305 "reset": true, 00:35:18.305 "nvme_admin": false, 00:35:18.305 "nvme_io": false, 00:35:18.305 "nvme_io_md": false, 00:35:18.305 "write_zeroes": true, 00:35:18.305 "zcopy": false, 00:35:18.305 "get_zone_info": false, 00:35:18.305 "zone_management": false, 00:35:18.305 "zone_append": false, 00:35:18.305 "compare": false, 00:35:18.305 "compare_and_write": false, 00:35:18.305 "abort": false, 00:35:18.305 "seek_hole": false, 00:35:18.305 "seek_data": false, 00:35:18.305 "copy": false, 00:35:18.305 "nvme_iov_md": false 00:35:18.305 }, 00:35:18.305 "memory_domains": [ 00:35:18.305 { 00:35:18.305 "dma_device_id": "system", 00:35:18.305 "dma_device_type": 1 00:35:18.305 }, 00:35:18.305 { 00:35:18.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:18.305 "dma_device_type": 2 00:35:18.305 }, 00:35:18.305 { 00:35:18.305 "dma_device_id": "system", 00:35:18.305 "dma_device_type": 1 00:35:18.305 }, 00:35:18.305 { 00:35:18.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:18.305 "dma_device_type": 2 00:35:18.305 } 00:35:18.305 ], 00:35:18.305 "driver_specific": { 00:35:18.305 "raid": { 00:35:18.305 "uuid": "30bbee84-6264-4e70-86a9-dd25b8276930", 00:35:18.305 "strip_size_kb": 64, 00:35:18.305 "state": "online", 00:35:18.305 "raid_level": "concat", 00:35:18.305 "superblock": true, 00:35:18.305 "num_base_bdevs": 2, 00:35:18.306 "num_base_bdevs_discovered": 2, 00:35:18.306 "num_base_bdevs_operational": 2, 00:35:18.306 "base_bdevs_list": [ 00:35:18.306 { 00:35:18.306 "name": "pt1", 00:35:18.306 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:18.306 "is_configured": true, 00:35:18.306 "data_offset": 2048, 00:35:18.306 "data_size": 63488 00:35:18.306 }, 00:35:18.306 { 00:35:18.306 "name": 
"pt2", 00:35:18.306 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:18.306 "is_configured": true, 00:35:18.306 "data_offset": 2048, 00:35:18.306 "data_size": 63488 00:35:18.306 } 00:35:18.306 ] 00:35:18.306 } 00:35:18.306 } 00:35:18.306 }' 00:35:18.306 05:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:18.306 05:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:35:18.306 pt2' 00:35:18.306 05:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:18.306 05:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:18.306 05:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:18.306 05:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:35:18.306 05:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:18.306 05:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.306 05:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:18.306 05:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.306 05:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:18.306 05:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:18.306 05:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:18.306 05:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:35:18.306 05:26:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:18.306 05:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.306 05:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:18.306 05:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.306 05:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:18.306 05:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:18.306 05:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:18.306 05:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.306 05:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:18.306 05:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:35:18.306 [2024-12-09 05:26:05.265623] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:18.566 05:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.566 05:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 30bbee84-6264-4e70-86a9-dd25b8276930 '!=' 30bbee84-6264-4e70-86a9-dd25b8276930 ']' 00:35:18.566 05:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:35:18.566 05:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:35:18.566 05:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:35:18.566 05:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62211 00:35:18.566 05:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62211 ']' 00:35:18.566 05:26:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 62211 00:35:18.566 05:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:35:18.566 05:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:18.566 05:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62211 00:35:18.566 killing process with pid 62211 00:35:18.566 05:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:18.566 05:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:18.566 05:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62211' 00:35:18.566 05:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62211 00:35:18.566 [2024-12-09 05:26:05.348613] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:18.566 05:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62211 00:35:18.566 [2024-12-09 05:26:05.348720] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:18.566 [2024-12-09 05:26:05.348828] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:18.566 [2024-12-09 05:26:05.348852] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:35:18.566 [2024-12-09 05:26:05.529016] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:19.941 05:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:35:19.941 00:35:19.941 real 0m4.934s 00:35:19.941 user 0m7.189s 00:35:19.941 sys 0m0.749s 00:35:19.941 05:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:19.941 05:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:35:19.941 ************************************ 00:35:19.941 END TEST raid_superblock_test 00:35:19.941 ************************************ 00:35:19.941 05:26:06 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:35:19.941 05:26:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:35:19.941 05:26:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:19.941 05:26:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:19.941 ************************************ 00:35:19.941 START TEST raid_read_error_test 00:35:19.941 ************************************ 00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- 
# base_bdevs=('BaseBdev1' 'BaseBdev2') 00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.g0LXnzSV1g 00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62428 00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62428 00:35:19.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62428 ']' 00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:19.941 05:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:35:19.941 [2024-12-09 05:26:06.849088] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:35:19.941 [2024-12-09 05:26:06.849286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62428 ] 00:35:20.200 [2024-12-09 05:26:07.042077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:20.458 [2024-12-09 05:26:07.177007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:20.458 [2024-12-09 05:26:07.382230] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:20.458 [2024-12-09 05:26:07.382280] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:21.026 05:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:21.026 05:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:35:21.026 05:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:35:21.026 05:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:35:21.026 05:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.026 05:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:21.026 BaseBdev1_malloc 00:35:21.026 05:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.026 05:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:35:21.026 05:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.026 05:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:21.026 true 00:35:21.026 05:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:35:21.026 05:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:35:21.026 05:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.026 05:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:21.026 [2024-12-09 05:26:07.800040] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:35:21.026 [2024-12-09 05:26:07.800143] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:21.026 [2024-12-09 05:26:07.800173] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:35:21.026 [2024-12-09 05:26:07.800191] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:21.026 [2024-12-09 05:26:07.803278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:21.026 [2024-12-09 05:26:07.803339] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:21.026 BaseBdev1 00:35:21.026 05:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.026 05:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:35:21.026 05:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:35:21.026 05:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.026 05:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:21.026 BaseBdev2_malloc 00:35:21.026 05:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.026 05:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:35:21.026 05:26:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.026 05:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:21.026 true 00:35:21.026 05:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.026 05:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:35:21.026 05:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.026 05:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:21.026 [2024-12-09 05:26:07.855757] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:35:21.026 [2024-12-09 05:26:07.855861] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:21.026 [2024-12-09 05:26:07.855886] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:35:21.026 [2024-12-09 05:26:07.855902] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:21.026 [2024-12-09 05:26:07.858956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:21.026 [2024-12-09 05:26:07.859003] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:35:21.026 BaseBdev2 00:35:21.026 05:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.026 05:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:35:21.027 05:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.027 05:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:21.027 [2024-12-09 05:26:07.863983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:35:21.027 [2024-12-09 05:26:07.866492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:21.027 [2024-12-09 05:26:07.866718] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:35:21.027 [2024-12-09 05:26:07.866739] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:35:21.027 [2024-12-09 05:26:07.867038] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:35:21.027 [2024-12-09 05:26:07.867273] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:35:21.027 [2024-12-09 05:26:07.867294] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:35:21.027 [2024-12-09 05:26:07.867476] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:21.027 05:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.027 05:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:35:21.027 05:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:21.027 05:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:21.027 05:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:21.027 05:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:21.027 05:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:21.027 05:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:21.027 05:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:21.027 05:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:35:21.027 05:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:21.027 05:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:21.027 05:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:21.027 05:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.027 05:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:21.027 05:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.027 05:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:21.027 "name": "raid_bdev1", 00:35:21.027 "uuid": "1f48748c-6cc6-4f86-8ec4-ac6a1c18cc85", 00:35:21.027 "strip_size_kb": 64, 00:35:21.027 "state": "online", 00:35:21.027 "raid_level": "concat", 00:35:21.027 "superblock": true, 00:35:21.027 "num_base_bdevs": 2, 00:35:21.027 "num_base_bdevs_discovered": 2, 00:35:21.027 "num_base_bdevs_operational": 2, 00:35:21.027 "base_bdevs_list": [ 00:35:21.027 { 00:35:21.027 "name": "BaseBdev1", 00:35:21.027 "uuid": "67a13bcf-04a3-5fc7-8f98-87e1b8c41447", 00:35:21.027 "is_configured": true, 00:35:21.027 "data_offset": 2048, 00:35:21.027 "data_size": 63488 00:35:21.027 }, 00:35:21.027 { 00:35:21.027 "name": "BaseBdev2", 00:35:21.027 "uuid": "aa49b4dc-8881-5983-b56f-c250595e6ee0", 00:35:21.027 "is_configured": true, 00:35:21.027 "data_offset": 2048, 00:35:21.027 "data_size": 63488 00:35:21.027 } 00:35:21.027 ] 00:35:21.027 }' 00:35:21.027 05:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:21.027 05:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:21.595 05:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:35:21.595 05:26:08 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:35:21.595 [2024-12-09 05:26:08.461354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:35:22.530 05:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:35:22.530 05:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.530 05:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:22.530 05:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.530 05:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:35:22.530 05:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:35:22.530 05:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:35:22.530 05:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:35:22.530 05:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:22.530 05:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:22.530 05:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:22.530 05:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:22.530 05:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:22.530 05:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:22.530 05:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:22.530 05:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:35:22.530 05:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:22.530 05:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:22.530 05:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:22.530 05:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.530 05:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:22.530 05:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.530 05:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:22.530 "name": "raid_bdev1", 00:35:22.530 "uuid": "1f48748c-6cc6-4f86-8ec4-ac6a1c18cc85", 00:35:22.530 "strip_size_kb": 64, 00:35:22.530 "state": "online", 00:35:22.530 "raid_level": "concat", 00:35:22.530 "superblock": true, 00:35:22.530 "num_base_bdevs": 2, 00:35:22.530 "num_base_bdevs_discovered": 2, 00:35:22.530 "num_base_bdevs_operational": 2, 00:35:22.530 "base_bdevs_list": [ 00:35:22.530 { 00:35:22.530 "name": "BaseBdev1", 00:35:22.530 "uuid": "67a13bcf-04a3-5fc7-8f98-87e1b8c41447", 00:35:22.530 "is_configured": true, 00:35:22.530 "data_offset": 2048, 00:35:22.530 "data_size": 63488 00:35:22.530 }, 00:35:22.530 { 00:35:22.530 "name": "BaseBdev2", 00:35:22.530 "uuid": "aa49b4dc-8881-5983-b56f-c250595e6ee0", 00:35:22.530 "is_configured": true, 00:35:22.530 "data_offset": 2048, 00:35:22.530 "data_size": 63488 00:35:22.530 } 00:35:22.530 ] 00:35:22.530 }' 00:35:22.530 05:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:22.530 05:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.098 05:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:35:23.098 05:26:09 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.098 05:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.098 [2024-12-09 05:26:09.916674] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:23.098 [2024-12-09 05:26:09.916713] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:23.098 [2024-12-09 05:26:09.920369] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:23.098 [2024-12-09 05:26:09.920422] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:23.098 [2024-12-09 05:26:09.920464] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:23.098 [2024-12-09 05:26:09.920480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:35:23.098 { 00:35:23.098 "results": [ 00:35:23.098 { 00:35:23.098 "job": "raid_bdev1", 00:35:23.098 "core_mask": "0x1", 00:35:23.098 "workload": "randrw", 00:35:23.098 "percentage": 50, 00:35:23.098 "status": "finished", 00:35:23.098 "queue_depth": 1, 00:35:23.098 "io_size": 131072, 00:35:23.098 "runtime": 1.453161, 00:35:23.098 "iops": 11713.774316816925, 00:35:23.098 "mibps": 1464.2217896021157, 00:35:23.098 "io_failed": 1, 00:35:23.098 "io_timeout": 0, 00:35:23.098 "avg_latency_us": 118.9912753333725, 00:35:23.098 "min_latency_us": 34.443636363636365, 00:35:23.098 "max_latency_us": 1608.610909090909 00:35:23.098 } 00:35:23.098 ], 00:35:23.098 "core_count": 1 00:35:23.098 } 00:35:23.098 05:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.098 05:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62428 00:35:23.098 05:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62428 ']' 00:35:23.098 05:26:09 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62428 00:35:23.098 05:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:35:23.098 05:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:23.098 05:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62428 00:35:23.098 killing process with pid 62428 00:35:23.098 05:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:23.098 05:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:23.098 05:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62428' 00:35:23.098 05:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62428 00:35:23.098 [2024-12-09 05:26:09.960243] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:23.098 05:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62428 00:35:23.356 [2024-12-09 05:26:10.078514] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:24.288 05:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.g0LXnzSV1g 00:35:24.288 05:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:35:24.288 05:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:35:24.288 05:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:35:24.288 05:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:35:24.288 05:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:35:24.288 05:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:35:24.288 05:26:11 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:35:24.288 00:35:24.288 real 0m4.509s 00:35:24.288 user 0m5.500s 00:35:24.288 sys 0m0.631s 00:35:24.288 ************************************ 00:35:24.288 END TEST raid_read_error_test 00:35:24.288 ************************************ 00:35:24.288 05:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:24.288 05:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:24.547 05:26:11 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:35:24.547 05:26:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:35:24.547 05:26:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:24.547 05:26:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:24.547 ************************************ 00:35:24.547 START TEST raid_write_error_test 00:35:24.547 ************************************ 00:35:24.547 05:26:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:35:24.547 05:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:35:24.547 05:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:35:24.547 05:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:35:24.547 05:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:35:24.547 05:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:35:24.547 05:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:35:24.547 05:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:35:24.547 05:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:35:24.547 05:26:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:35:24.547 05:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:35:24.547 05:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:35:24.547 05:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:35:24.547 05:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:35:24.547 05:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:35:24.547 05:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:35:24.547 05:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:35:24.547 05:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:35:24.547 05:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:35:24.547 05:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:35:24.547 05:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:35:24.547 05:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:35:24.547 05:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:35:24.547 05:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MIXI2Glcqa 00:35:24.547 05:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62568 00:35:24.548 05:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62568 00:35:24.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:24.548 05:26:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62568 ']' 00:35:24.548 05:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:35:24.548 05:26:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:24.548 05:26:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:24.548 05:26:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:24.548 05:26:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:24.548 05:26:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:24.548 [2024-12-09 05:26:11.411006] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:35:24.548 [2024-12-09 05:26:11.411508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62568 ] 00:35:24.806 [2024-12-09 05:26:11.594705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:24.806 [2024-12-09 05:26:11.720505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:25.063 [2024-12-09 05:26:11.931827] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:25.063 [2024-12-09 05:26:11.931868] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:25.630 BaseBdev1_malloc 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:25.630 true 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:25.630 [2024-12-09 05:26:12.370798] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:35:25.630 [2024-12-09 05:26:12.370906] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:25.630 [2024-12-09 05:26:12.370943] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:35:25.630 [2024-12-09 05:26:12.370960] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:25.630 [2024-12-09 05:26:12.373598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:25.630 [2024-12-09 05:26:12.373822] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:25.630 BaseBdev1 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:25.630 BaseBdev2_malloc 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:35:25.630 05:26:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:25.630 true 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:25.630 [2024-12-09 05:26:12.434368] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:35:25.630 [2024-12-09 05:26:12.434456] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:25.630 [2024-12-09 05:26:12.434477] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:35:25.630 [2024-12-09 05:26:12.434492] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:25.630 [2024-12-09 05:26:12.437075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:25.630 [2024-12-09 05:26:12.437118] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:35:25.630 BaseBdev2 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:25.630 [2024-12-09 05:26:12.442448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:35:25.630 [2024-12-09 05:26:12.444756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:25.630 [2024-12-09 05:26:12.445162] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:35:25.630 [2024-12-09 05:26:12.445329] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:35:25.630 [2024-12-09 05:26:12.445643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:35:25.630 [2024-12-09 05:26:12.445877] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:35:25.630 [2024-12-09 05:26:12.445898] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:35:25.630 [2024-12-09 05:26:12.446093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:25.630 05:26:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.630 05:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:25.630 "name": "raid_bdev1", 00:35:25.630 "uuid": "58bbe9b1-682d-42df-8175-4c08a4c81b21", 00:35:25.630 "strip_size_kb": 64, 00:35:25.630 "state": "online", 00:35:25.630 "raid_level": "concat", 00:35:25.630 "superblock": true, 00:35:25.630 "num_base_bdevs": 2, 00:35:25.630 "num_base_bdevs_discovered": 2, 00:35:25.630 "num_base_bdevs_operational": 2, 00:35:25.630 "base_bdevs_list": [ 00:35:25.630 { 00:35:25.630 "name": "BaseBdev1", 00:35:25.630 "uuid": "09da5b84-7cec-57f2-b3cf-83476384b34b", 00:35:25.630 "is_configured": true, 00:35:25.630 "data_offset": 2048, 00:35:25.630 "data_size": 63488 00:35:25.630 }, 00:35:25.630 { 00:35:25.630 "name": "BaseBdev2", 00:35:25.630 "uuid": "3eb52c83-4eb2-5566-98b1-e1e2e786c8d9", 00:35:25.630 "is_configured": true, 00:35:25.630 "data_offset": 2048, 00:35:25.631 "data_size": 63488 00:35:25.631 } 00:35:25.631 ] 00:35:25.631 }' 00:35:25.631 05:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:25.631 05:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:26.197 05:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:35:26.197 05:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:35:26.197 [2024-12-09 05:26:13.083976] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:35:27.131 05:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:35:27.131 05:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.131 05:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:27.131 05:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.131 05:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:35:27.131 05:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:35:27.131 05:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:35:27.131 05:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:35:27.131 05:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:27.131 05:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:27.131 05:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:27.131 05:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:27.131 05:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:27.131 05:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:27.131 05:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:35:27.131 05:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:27.131 05:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:27.131 05:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:27.131 05:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.131 05:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:27.131 05:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:27.131 05:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.131 05:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:27.131 "name": "raid_bdev1", 00:35:27.131 "uuid": "58bbe9b1-682d-42df-8175-4c08a4c81b21", 00:35:27.131 "strip_size_kb": 64, 00:35:27.131 "state": "online", 00:35:27.131 "raid_level": "concat", 00:35:27.131 "superblock": true, 00:35:27.131 "num_base_bdevs": 2, 00:35:27.131 "num_base_bdevs_discovered": 2, 00:35:27.131 "num_base_bdevs_operational": 2, 00:35:27.131 "base_bdevs_list": [ 00:35:27.131 { 00:35:27.131 "name": "BaseBdev1", 00:35:27.131 "uuid": "09da5b84-7cec-57f2-b3cf-83476384b34b", 00:35:27.131 "is_configured": true, 00:35:27.131 "data_offset": 2048, 00:35:27.131 "data_size": 63488 00:35:27.131 }, 00:35:27.131 { 00:35:27.131 "name": "BaseBdev2", 00:35:27.131 "uuid": "3eb52c83-4eb2-5566-98b1-e1e2e786c8d9", 00:35:27.132 "is_configured": true, 00:35:27.132 "data_offset": 2048, 00:35:27.132 "data_size": 63488 00:35:27.132 } 00:35:27.132 ] 00:35:27.132 }' 00:35:27.132 05:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:27.132 05:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:27.697 05:26:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:35:27.697 05:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.697 05:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:27.697 [2024-12-09 05:26:14.509901] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:27.697 [2024-12-09 05:26:14.509972] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:27.697 [2024-12-09 05:26:14.513140] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:27.697 [2024-12-09 05:26:14.513376] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:27.697 [2024-12-09 05:26:14.513434] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:27.697 [2024-12-09 05:26:14.513457] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:35:27.697 { 00:35:27.697 "results": [ 00:35:27.697 { 00:35:27.697 "job": "raid_bdev1", 00:35:27.697 "core_mask": "0x1", 00:35:27.697 "workload": "randrw", 00:35:27.697 "percentage": 50, 00:35:27.697 "status": "finished", 00:35:27.697 "queue_depth": 1, 00:35:27.697 "io_size": 131072, 00:35:27.697 "runtime": 1.4237, 00:35:27.697 "iops": 12026.4100582988, 00:35:27.697 "mibps": 1503.30125728735, 00:35:27.697 "io_failed": 1, 00:35:27.697 "io_timeout": 0, 00:35:27.697 "avg_latency_us": 116.1705758867658, 00:35:27.697 "min_latency_us": 34.443636363636365, 00:35:27.697 "max_latency_us": 1690.530909090909 00:35:27.697 } 00:35:27.697 ], 00:35:27.697 "core_count": 1 00:35:27.697 } 00:35:27.697 05:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.697 05:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62568 00:35:27.697 05:26:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 62568 ']' 00:35:27.697 05:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62568 00:35:27.697 05:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:35:27.697 05:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:27.697 05:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62568 00:35:27.697 killing process with pid 62568 00:35:27.697 05:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:27.697 05:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:27.697 05:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62568' 00:35:27.697 05:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62568 00:35:27.697 [2024-12-09 05:26:14.552003] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:27.697 05:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62568 00:35:27.697 [2024-12-09 05:26:14.656987] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:29.071 05:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:35:29.071 05:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MIXI2Glcqa 00:35:29.071 05:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:35:29.071 ************************************ 00:35:29.071 END TEST raid_write_error_test 00:35:29.071 ************************************ 00:35:29.071 05:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:35:29.071 05:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:35:29.071 
05:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:35:29.071 05:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:35:29.071 05:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:35:29.071 00:35:29.071 real 0m4.555s 00:35:29.071 user 0m5.604s 00:35:29.071 sys 0m0.616s 00:35:29.071 05:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:29.071 05:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:29.071 05:26:15 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:35:29.071 05:26:15 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:35:29.071 05:26:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:35:29.071 05:26:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:29.071 05:26:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:29.071 ************************************ 00:35:29.072 START TEST raid_state_function_test 00:35:29.072 ************************************ 00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
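The raid_write_error_test teardown above derives `fail_per_s` by piping a bdevperf results file through `grep -v Job | grep raid_bdev1 | awk '{print $6}'` and then asserting it is not `0.00`. A minimal reproduction of that pipeline, using a made-up results file in place of the real `/raidtest/tmp.MIXI2Glcqa` (the column layout below is an assumption for illustration):

```shell
# Hypothetical two-line results file; the real one is produced by bdevperf.
# Field 6 of the raid_bdev1 row is assumed to hold failed IO per second.
cat > /tmp/raid_results.txt <<'EOF'
Job core_mask iops mibps io_failed fail_per_s
raid_bdev1 0x1 12026.41 1503.30 1 0.70
EOF
fail_per_s=$(grep -v Job /tmp/raid_results.txt | grep raid_bdev1 | awk '{print $6}')
# The check in the log only requires a non-zero failure rate:
if [[ "$fail_per_s" != "0.00" ]]; then
  echo "observed write failures: $fail_per_s per second"
fi
# → observed write failures: 0.70 per second
```

This mirrors the `[[ 0.70 != \0\.\0\0 ]]` assertion seen at bdev_raid.sh@849: the test injects one write error and passes as long as the measured failure rate is non-zero.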
00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:35:29.072 Process raid pid: 62712 00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62712 00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62712' 
00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62712 00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62712 ']' 00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:29.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:29.072 05:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:29.072 [2024-12-09 05:26:16.020648] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
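The `waitforlisten 62712` call above blocks until the freshly launched `bdev_svc` app is up and listening on `/var/tmp/spdk.sock`. A rough sketch of that idea, assuming nothing beyond what the log shows (the function name, retry count, and sleep interval here are illustrative, not the real values from autotest_common.sh):

```shell
# Poll until the target pid is alive AND its RPC UNIX socket exists.
waitforlisten_sketch() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
  while (( max_retries-- > 0 )); do
    kill -0 "$pid" 2>/dev/null || return 1   # target process exited early
    [[ -S "$rpc_addr" ]] && return 0         # socket is up, rpc_cmd can proceed
    sleep 0.1
  done
  return 1                                   # timed out waiting for the socket
}
```

The real helper is more thorough (it also exercises the RPC itself); this version only checks that the process is alive and the socket file exists, which is enough to show the polling pattern.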
00:35:29.072 [2024-12-09 05:26:16.020872] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:29.330 [2024-12-09 05:26:16.207829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:29.588 [2024-12-09 05:26:16.340112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:29.588 [2024-12-09 05:26:16.550530] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:29.588 [2024-12-09 05:26:16.550576] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:30.174 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:30.174 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:35:30.174 05:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:35:30.174 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.174 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:30.174 [2024-12-09 05:26:17.042431] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:30.174 [2024-12-09 05:26:17.042671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:30.174 [2024-12-09 05:26:17.042700] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:30.174 [2024-12-09 05:26:17.042717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:30.174 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.174 05:26:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:35:30.174 05:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:30.174 05:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:30.174 05:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:30.174 05:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:30.174 05:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:30.174 05:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:30.174 05:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:30.174 05:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:30.174 05:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:30.174 05:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:30.174 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.174 05:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:30.174 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:30.174 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.174 05:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:30.174 "name": "Existed_Raid", 00:35:30.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:30.174 "strip_size_kb": 0, 00:35:30.174 "state": "configuring", 00:35:30.174 
"raid_level": "raid1", 00:35:30.174 "superblock": false, 00:35:30.174 "num_base_bdevs": 2, 00:35:30.174 "num_base_bdevs_discovered": 0, 00:35:30.174 "num_base_bdevs_operational": 2, 00:35:30.174 "base_bdevs_list": [ 00:35:30.174 { 00:35:30.174 "name": "BaseBdev1", 00:35:30.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:30.174 "is_configured": false, 00:35:30.174 "data_offset": 0, 00:35:30.174 "data_size": 0 00:35:30.174 }, 00:35:30.174 { 00:35:30.174 "name": "BaseBdev2", 00:35:30.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:30.174 "is_configured": false, 00:35:30.174 "data_offset": 0, 00:35:30.174 "data_size": 0 00:35:30.174 } 00:35:30.174 ] 00:35:30.174 }' 00:35:30.174 05:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:30.174 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:30.742 05:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:30.742 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.742 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:30.742 [2024-12-09 05:26:17.550618] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:30.742 [2024-12-09 05:26:17.550659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:35:30.742 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.742 05:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:35:30.742 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.742 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:35:30.742 [2024-12-09 05:26:17.558590] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:30.742 [2024-12-09 05:26:17.558827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:30.742 [2024-12-09 05:26:17.558953] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:30.742 [2024-12-09 05:26:17.559118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:30.742 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.742 05:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:35:30.742 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.742 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:30.742 [2024-12-09 05:26:17.604239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:30.742 BaseBdev1 00:35:30.742 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.742 05:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:35:30.742 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:35:30.742 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:30.742 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:35:30.742 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:30.742 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:30.742 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:35:30.742 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.742 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:30.742 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.742 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:30.742 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.742 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:30.742 [ 00:35:30.742 { 00:35:30.742 "name": "BaseBdev1", 00:35:30.742 "aliases": [ 00:35:30.742 "e9b40f65-d0f3-4e75-92b2-c79044ee8242" 00:35:30.742 ], 00:35:30.742 "product_name": "Malloc disk", 00:35:30.742 "block_size": 512, 00:35:30.742 "num_blocks": 65536, 00:35:30.742 "uuid": "e9b40f65-d0f3-4e75-92b2-c79044ee8242", 00:35:30.742 "assigned_rate_limits": { 00:35:30.742 "rw_ios_per_sec": 0, 00:35:30.742 "rw_mbytes_per_sec": 0, 00:35:30.742 "r_mbytes_per_sec": 0, 00:35:30.742 "w_mbytes_per_sec": 0 00:35:30.742 }, 00:35:30.742 "claimed": true, 00:35:30.742 "claim_type": "exclusive_write", 00:35:30.742 "zoned": false, 00:35:30.742 "supported_io_types": { 00:35:30.742 "read": true, 00:35:30.742 "write": true, 00:35:30.742 "unmap": true, 00:35:30.742 "flush": true, 00:35:30.742 "reset": true, 00:35:30.742 "nvme_admin": false, 00:35:30.742 "nvme_io": false, 00:35:30.742 "nvme_io_md": false, 00:35:30.742 "write_zeroes": true, 00:35:30.742 "zcopy": true, 00:35:30.742 "get_zone_info": false, 00:35:30.742 "zone_management": false, 00:35:30.742 "zone_append": false, 00:35:30.742 "compare": false, 00:35:30.742 "compare_and_write": false, 00:35:30.742 "abort": true, 00:35:30.742 "seek_hole": false, 00:35:30.742 "seek_data": false, 00:35:30.742 "copy": true, 00:35:30.742 "nvme_iov_md": 
false 00:35:30.742 }, 00:35:30.742 "memory_domains": [ 00:35:30.742 { 00:35:30.742 "dma_device_id": "system", 00:35:30.742 "dma_device_type": 1 00:35:30.742 }, 00:35:30.742 { 00:35:30.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:30.742 "dma_device_type": 2 00:35:30.742 } 00:35:30.742 ], 00:35:30.742 "driver_specific": {} 00:35:30.742 } 00:35:30.742 ] 00:35:30.742 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.742 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:35:30.742 05:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:35:30.742 05:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:30.742 05:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:30.743 05:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:30.743 05:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:30.743 05:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:30.743 05:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:30.743 05:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:30.743 05:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:30.743 05:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:30.743 05:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:30.743 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.743 05:26:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:30.743 05:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:30.743 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.743 05:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:30.743 "name": "Existed_Raid", 00:35:30.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:30.743 "strip_size_kb": 0, 00:35:30.743 "state": "configuring", 00:35:30.743 "raid_level": "raid1", 00:35:30.743 "superblock": false, 00:35:30.743 "num_base_bdevs": 2, 00:35:30.743 "num_base_bdevs_discovered": 1, 00:35:30.743 "num_base_bdevs_operational": 2, 00:35:30.743 "base_bdevs_list": [ 00:35:30.743 { 00:35:30.743 "name": "BaseBdev1", 00:35:30.743 "uuid": "e9b40f65-d0f3-4e75-92b2-c79044ee8242", 00:35:30.743 "is_configured": true, 00:35:30.743 "data_offset": 0, 00:35:30.743 "data_size": 65536 00:35:30.743 }, 00:35:30.743 { 00:35:30.743 "name": "BaseBdev2", 00:35:30.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:30.743 "is_configured": false, 00:35:30.743 "data_offset": 0, 00:35:30.743 "data_size": 0 00:35:30.743 } 00:35:30.743 ] 00:35:30.743 }' 00:35:30.743 05:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:30.743 05:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:31.336 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:31.336 05:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.336 05:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:31.336 [2024-12-09 05:26:18.148413] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:31.336 [2024-12-09 05:26:18.148461] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:35:31.336 05:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.336 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:35:31.336 05:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.336 05:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:31.336 [2024-12-09 05:26:18.160499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:31.337 [2024-12-09 05:26:18.163399] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:31.337 [2024-12-09 05:26:18.163626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:31.337 05:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.337 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:35:31.337 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:31.337 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:35:31.337 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:31.337 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:31.337 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:31.337 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:31.337 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:35:31.337 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:31.337 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:31.337 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:31.337 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:31.337 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:31.337 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:31.337 05:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.337 05:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:31.337 05:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.337 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:31.337 "name": "Existed_Raid", 00:35:31.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:31.337 "strip_size_kb": 0, 00:35:31.337 "state": "configuring", 00:35:31.337 "raid_level": "raid1", 00:35:31.337 "superblock": false, 00:35:31.337 "num_base_bdevs": 2, 00:35:31.337 "num_base_bdevs_discovered": 1, 00:35:31.337 "num_base_bdevs_operational": 2, 00:35:31.337 "base_bdevs_list": [ 00:35:31.337 { 00:35:31.337 "name": "BaseBdev1", 00:35:31.337 "uuid": "e9b40f65-d0f3-4e75-92b2-c79044ee8242", 00:35:31.337 "is_configured": true, 00:35:31.337 "data_offset": 0, 00:35:31.337 "data_size": 65536 00:35:31.337 }, 00:35:31.337 { 00:35:31.337 "name": "BaseBdev2", 00:35:31.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:31.337 "is_configured": false, 00:35:31.337 "data_offset": 0, 00:35:31.337 "data_size": 0 00:35:31.337 } 00:35:31.337 
] 00:35:31.337 }' 00:35:31.337 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:31.337 05:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:31.904 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:35:31.904 05:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.904 05:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:31.904 [2024-12-09 05:26:18.702623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:31.904 [2024-12-09 05:26:18.702705] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:35:31.904 [2024-12-09 05:26:18.702732] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:35:31.904 [2024-12-09 05:26:18.703190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:35:31.904 [2024-12-09 05:26:18.703544] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:35:31.904 [2024-12-09 05:26:18.703574] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:35:31.904 [2024-12-09 05:26:18.703951] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:31.904 BaseBdev2 00:35:31.904 05:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.904 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:35:31.904 05:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:35:31.904 05:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:31.904 05:26:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:35:31.904 05:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:31.904 05:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:31.905 05:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:31.905 05:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.905 05:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:31.905 05:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.905 05:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:31.905 05:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.905 05:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:31.905 [ 00:35:31.905 { 00:35:31.905 "name": "BaseBdev2", 00:35:31.905 "aliases": [ 00:35:31.905 "d8fd3505-f0c5-4588-bac6-8e658e67c896" 00:35:31.905 ], 00:35:31.905 "product_name": "Malloc disk", 00:35:31.905 "block_size": 512, 00:35:31.905 "num_blocks": 65536, 00:35:31.905 "uuid": "d8fd3505-f0c5-4588-bac6-8e658e67c896", 00:35:31.905 "assigned_rate_limits": { 00:35:31.905 "rw_ios_per_sec": 0, 00:35:31.905 "rw_mbytes_per_sec": 0, 00:35:31.905 "r_mbytes_per_sec": 0, 00:35:31.905 "w_mbytes_per_sec": 0 00:35:31.905 }, 00:35:31.905 "claimed": true, 00:35:31.905 "claim_type": "exclusive_write", 00:35:31.905 "zoned": false, 00:35:31.905 "supported_io_types": { 00:35:31.905 "read": true, 00:35:31.905 "write": true, 00:35:31.905 "unmap": true, 00:35:31.905 "flush": true, 00:35:31.905 "reset": true, 00:35:31.905 "nvme_admin": false, 00:35:31.905 "nvme_io": false, 00:35:31.905 "nvme_io_md": 
false, 00:35:31.905 "write_zeroes": true, 00:35:31.905 "zcopy": true, 00:35:31.905 "get_zone_info": false, 00:35:31.905 "zone_management": false, 00:35:31.905 "zone_append": false, 00:35:31.905 "compare": false, 00:35:31.905 "compare_and_write": false, 00:35:31.905 "abort": true, 00:35:31.905 "seek_hole": false, 00:35:31.905 "seek_data": false, 00:35:31.905 "copy": true, 00:35:31.905 "nvme_iov_md": false 00:35:31.905 }, 00:35:31.905 "memory_domains": [ 00:35:31.905 { 00:35:31.905 "dma_device_id": "system", 00:35:31.905 "dma_device_type": 1 00:35:31.905 }, 00:35:31.905 { 00:35:31.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:31.905 "dma_device_type": 2 00:35:31.905 } 00:35:31.905 ], 00:35:31.905 "driver_specific": {} 00:35:31.905 } 00:35:31.905 ] 00:35:31.905 05:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.905 05:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:35:31.905 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:35:31.905 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:31.905 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:35:31.905 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:31.905 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:31.905 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:31.905 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:31.905 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:31.905 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:35:31.905 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:31.905 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:31.905 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:31.905 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:31.905 05:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.905 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:31.905 05:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:31.905 05:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.905 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:31.905 "name": "Existed_Raid", 00:35:31.905 "uuid": "bc026970-0462-448d-aca5-db1800d5ab71", 00:35:31.905 "strip_size_kb": 0, 00:35:31.905 "state": "online", 00:35:31.905 "raid_level": "raid1", 00:35:31.905 "superblock": false, 00:35:31.905 "num_base_bdevs": 2, 00:35:31.905 "num_base_bdevs_discovered": 2, 00:35:31.905 "num_base_bdevs_operational": 2, 00:35:31.905 "base_bdevs_list": [ 00:35:31.905 { 00:35:31.905 "name": "BaseBdev1", 00:35:31.905 "uuid": "e9b40f65-d0f3-4e75-92b2-c79044ee8242", 00:35:31.905 "is_configured": true, 00:35:31.905 "data_offset": 0, 00:35:31.905 "data_size": 65536 00:35:31.905 }, 00:35:31.905 { 00:35:31.905 "name": "BaseBdev2", 00:35:31.905 "uuid": "d8fd3505-f0c5-4588-bac6-8e658e67c896", 00:35:31.905 "is_configured": true, 00:35:31.905 "data_offset": 0, 00:35:31.905 "data_size": 65536 00:35:31.905 } 00:35:31.905 ] 00:35:31.905 }' 00:35:31.905 05:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
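The verify_raid_bdev_properties pass that follows dumps the online Raid Volume with `bdev_get_bdevs` and compares it to its base bdevs. Notable in those dumps: the raid1 volume reports `unmap`, `flush`, and `copy` as false even though the Malloc base bdevs report them true. A jq sketch of that comparison using canned excerpts of the two dumps (trimmed to the fields of interest):

```shell
# Trimmed copies of the bdev_get_bdevs output shown in this log.
cat > /tmp/raid_vol.json <<'EOF'
{"name": "Existed_Raid", "product_name": "Raid Volume",
 "supported_io_types": {"read": true, "write": true, "unmap": false,
                        "flush": false, "reset": true, "copy": false}}
EOF
cat > /tmp/base_bdev.json <<'EOF'
{"name": "BaseBdev1", "product_name": "Malloc disk",
 "supported_io_types": {"read": true, "write": true, "unmap": true,
                        "flush": true, "reset": true, "copy": true}}
EOF
raid_unmap=$(jq -r '.supported_io_types.unmap' /tmp/raid_vol.json)
base_unmap=$(jq -r '.supported_io_types.unmap' /tmp/base_bdev.json)
echo "raid unmap=$raid_unmap, base unmap=$base_unmap"
# → raid unmap=false, base unmap=true
```

This only restates what the dumps in the log show; whether each io type is masked is a property of the raid level under test, not something this sketch verifies.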
00:35:31.905 05:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:32.472 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:35:32.472 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:35:32.472 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:32.472 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:32.472 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:35:32.472 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:32.472 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:35:32.472 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:32.472 05:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.472 05:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:32.472 [2024-12-09 05:26:19.271150] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:32.472 05:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.472 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:32.472 "name": "Existed_Raid", 00:35:32.472 "aliases": [ 00:35:32.472 "bc026970-0462-448d-aca5-db1800d5ab71" 00:35:32.472 ], 00:35:32.472 "product_name": "Raid Volume", 00:35:32.472 "block_size": 512, 00:35:32.472 "num_blocks": 65536, 00:35:32.472 "uuid": "bc026970-0462-448d-aca5-db1800d5ab71", 00:35:32.472 "assigned_rate_limits": { 00:35:32.472 "rw_ios_per_sec": 0, 00:35:32.472 "rw_mbytes_per_sec": 0, 00:35:32.472 "r_mbytes_per_sec": 
0, 00:35:32.472 "w_mbytes_per_sec": 0 00:35:32.472 }, 00:35:32.472 "claimed": false, 00:35:32.472 "zoned": false, 00:35:32.472 "supported_io_types": { 00:35:32.472 "read": true, 00:35:32.472 "write": true, 00:35:32.472 "unmap": false, 00:35:32.472 "flush": false, 00:35:32.472 "reset": true, 00:35:32.472 "nvme_admin": false, 00:35:32.472 "nvme_io": false, 00:35:32.472 "nvme_io_md": false, 00:35:32.472 "write_zeroes": true, 00:35:32.472 "zcopy": false, 00:35:32.472 "get_zone_info": false, 00:35:32.472 "zone_management": false, 00:35:32.472 "zone_append": false, 00:35:32.472 "compare": false, 00:35:32.472 "compare_and_write": false, 00:35:32.472 "abort": false, 00:35:32.472 "seek_hole": false, 00:35:32.472 "seek_data": false, 00:35:32.472 "copy": false, 00:35:32.472 "nvme_iov_md": false 00:35:32.472 }, 00:35:32.472 "memory_domains": [ 00:35:32.472 { 00:35:32.472 "dma_device_id": "system", 00:35:32.472 "dma_device_type": 1 00:35:32.472 }, 00:35:32.472 { 00:35:32.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:32.472 "dma_device_type": 2 00:35:32.472 }, 00:35:32.472 { 00:35:32.472 "dma_device_id": "system", 00:35:32.472 "dma_device_type": 1 00:35:32.472 }, 00:35:32.472 { 00:35:32.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:32.472 "dma_device_type": 2 00:35:32.472 } 00:35:32.472 ], 00:35:32.472 "driver_specific": { 00:35:32.472 "raid": { 00:35:32.472 "uuid": "bc026970-0462-448d-aca5-db1800d5ab71", 00:35:32.472 "strip_size_kb": 0, 00:35:32.472 "state": "online", 00:35:32.472 "raid_level": "raid1", 00:35:32.472 "superblock": false, 00:35:32.472 "num_base_bdevs": 2, 00:35:32.472 "num_base_bdevs_discovered": 2, 00:35:32.472 "num_base_bdevs_operational": 2, 00:35:32.472 "base_bdevs_list": [ 00:35:32.472 { 00:35:32.472 "name": "BaseBdev1", 00:35:32.472 "uuid": "e9b40f65-d0f3-4e75-92b2-c79044ee8242", 00:35:32.472 "is_configured": true, 00:35:32.472 "data_offset": 0, 00:35:32.472 "data_size": 65536 00:35:32.472 }, 00:35:32.472 { 00:35:32.472 "name": "BaseBdev2", 
00:35:32.472 "uuid": "d8fd3505-f0c5-4588-bac6-8e658e67c896", 00:35:32.472 "is_configured": true, 00:35:32.472 "data_offset": 0, 00:35:32.472 "data_size": 65536 00:35:32.472 } 00:35:32.472 ] 00:35:32.472 } 00:35:32.472 } 00:35:32.472 }' 00:35:32.472 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:32.472 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:35:32.472 BaseBdev2' 00:35:32.472 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:32.472 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:32.472 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:32.472 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:35:32.472 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:32.472 05:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.472 05:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:32.730 [2024-12-09 05:26:19.534944] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:32.730 "name": "Existed_Raid", 00:35:32.730 "uuid": "bc026970-0462-448d-aca5-db1800d5ab71", 00:35:32.730 "strip_size_kb": 0, 00:35:32.730 "state": "online", 00:35:32.730 "raid_level": "raid1", 00:35:32.730 "superblock": false, 00:35:32.730 "num_base_bdevs": 2, 00:35:32.730 "num_base_bdevs_discovered": 1, 00:35:32.730 "num_base_bdevs_operational": 1, 00:35:32.730 "base_bdevs_list": [ 00:35:32.730 
{ 00:35:32.730 "name": null, 00:35:32.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:32.730 "is_configured": false, 00:35:32.730 "data_offset": 0, 00:35:32.730 "data_size": 65536 00:35:32.730 }, 00:35:32.730 { 00:35:32.730 "name": "BaseBdev2", 00:35:32.730 "uuid": "d8fd3505-f0c5-4588-bac6-8e658e67c896", 00:35:32.730 "is_configured": true, 00:35:32.730 "data_offset": 0, 00:35:32.730 "data_size": 65536 00:35:32.730 } 00:35:32.730 ] 00:35:32.730 }' 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:32.730 05:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:33.297 05:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:35:33.297 05:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:33.297 05:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:35:33.297 05:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:33.297 05:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.297 05:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:33.297 05:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.297 05:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:35:33.297 05:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:33.297 05:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:35:33.297 05:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.297 05:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:35:33.297 [2024-12-09 05:26:20.192409] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:33.297 [2024-12-09 05:26:20.192683] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:33.555 [2024-12-09 05:26:20.277408] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:33.555 [2024-12-09 05:26:20.277473] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:33.555 [2024-12-09 05:26:20.277500] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:35:33.555 05:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.556 05:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:35:33.556 05:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:33.556 05:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:33.556 05:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:35:33.556 05:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.556 05:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:33.556 05:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.556 05:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:35:33.556 05:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:35:33.556 05:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:35:33.556 05:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62712 00:35:33.556 05:26:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62712 ']' 00:35:33.556 05:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62712 00:35:33.556 05:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:35:33.556 05:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:33.556 05:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62712 00:35:33.556 killing process with pid 62712 00:35:33.556 05:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:33.556 05:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:33.556 05:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62712' 00:35:33.556 05:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62712 00:35:33.556 [2024-12-09 05:26:20.369719] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:33.556 05:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62712 00:35:33.556 [2024-12-09 05:26:20.383814] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:34.929 05:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:35:34.929 00:35:34.929 real 0m5.595s 00:35:34.929 user 0m8.375s 00:35:34.929 sys 0m0.855s 00:35:34.929 05:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:34.929 05:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:34.929 ************************************ 00:35:34.929 END TEST raid_state_function_test 00:35:34.929 ************************************ 00:35:34.929 05:26:21 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:35:34.929 05:26:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:35:34.929 05:26:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:34.929 05:26:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:34.929 ************************************ 00:35:34.929 START TEST raid_state_function_test_sb 00:35:34.929 ************************************ 00:35:34.929 05:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:35:34.929 05:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:35:34.929 05:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:35:34.929 05:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:35:34.929 05:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:35:34.929 05:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:35:34.929 05:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:34.929 05:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:35:34.929 05:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:34.929 05:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:34.929 05:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:35:34.929 05:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:34.929 05:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:34.929 05:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:35:34.929 05:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:35:34.929 05:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:35:34.929 05:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:35:34.929 05:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:35:34.929 05:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:35:34.929 05:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:35:34.929 05:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:35:34.929 05:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:35:34.929 05:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:35:34.929 05:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62969 00:35:34.929 Process raid pid: 62969 00:35:34.929 05:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62969' 00:35:34.929 05:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:35:34.929 05:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62969 00:35:34.930 05:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62969 ']' 00:35:34.930 05:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:34.930 05:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:34.930 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:34.930 05:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:34.930 05:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:34.930 05:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:34.930 [2024-12-09 05:26:21.672078] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:35:34.930 [2024-12-09 05:26:21.672296] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:34.930 [2024-12-09 05:26:21.860315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:35.187 [2024-12-09 05:26:21.995357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:35.445 [2024-12-09 05:26:22.201399] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:35.445 [2024-12-09 05:26:22.201457] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:35.702 05:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:35.702 05:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:35:35.702 05:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:35:35.702 05:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.702 05:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:35.702 [2024-12-09 05:26:22.595277] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:35.702 [2024-12-09 05:26:22.595360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:35.702 [2024-12-09 05:26:22.595376] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:35.702 [2024-12-09 05:26:22.595391] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:35.702 05:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.702 05:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:35:35.702 05:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:35.702 05:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:35.702 05:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:35.702 05:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:35.702 05:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:35.702 05:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:35.702 05:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:35.702 05:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:35.702 05:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:35.702 05:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:35.702 05:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:35.702 05:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:35.702 05:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:35.702 05:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.702 05:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:35.702 "name": "Existed_Raid", 00:35:35.702 "uuid": "4c333235-7747-4ecf-b198-4fbf1ce23234", 00:35:35.702 "strip_size_kb": 0, 00:35:35.702 "state": "configuring", 00:35:35.702 "raid_level": "raid1", 00:35:35.702 "superblock": true, 00:35:35.702 "num_base_bdevs": 2, 00:35:35.702 "num_base_bdevs_discovered": 0, 00:35:35.702 "num_base_bdevs_operational": 2, 00:35:35.702 "base_bdevs_list": [ 00:35:35.702 { 00:35:35.702 "name": "BaseBdev1", 00:35:35.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:35.702 "is_configured": false, 00:35:35.702 "data_offset": 0, 00:35:35.702 "data_size": 0 00:35:35.702 }, 00:35:35.702 { 00:35:35.702 "name": "BaseBdev2", 00:35:35.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:35.702 "is_configured": false, 00:35:35.702 "data_offset": 0, 00:35:35.702 "data_size": 0 00:35:35.702 } 00:35:35.702 ] 00:35:35.702 }' 00:35:35.703 05:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:35.703 05:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:36.268 [2024-12-09 05:26:23.127286] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:35:36.268 [2024-12-09 05:26:23.127320] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:36.268 [2024-12-09 05:26:23.135280] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:36.268 [2024-12-09 05:26:23.135348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:36.268 [2024-12-09 05:26:23.135360] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:36.268 [2024-12-09 05:26:23.135377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:36.268 [2024-12-09 05:26:23.180019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:36.268 BaseBdev1 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:36.268 [ 00:35:36.268 { 00:35:36.268 "name": "BaseBdev1", 00:35:36.268 "aliases": [ 00:35:36.268 "d1645b76-d583-477f-9dc0-744ae92ba1d0" 00:35:36.268 ], 00:35:36.268 "product_name": "Malloc disk", 00:35:36.268 "block_size": 512, 00:35:36.268 "num_blocks": 65536, 00:35:36.268 "uuid": "d1645b76-d583-477f-9dc0-744ae92ba1d0", 00:35:36.268 "assigned_rate_limits": { 00:35:36.268 "rw_ios_per_sec": 0, 00:35:36.268 "rw_mbytes_per_sec": 0, 00:35:36.268 "r_mbytes_per_sec": 0, 00:35:36.268 "w_mbytes_per_sec": 0 00:35:36.268 }, 00:35:36.268 "claimed": true, 
00:35:36.268 "claim_type": "exclusive_write", 00:35:36.268 "zoned": false, 00:35:36.268 "supported_io_types": { 00:35:36.268 "read": true, 00:35:36.268 "write": true, 00:35:36.268 "unmap": true, 00:35:36.268 "flush": true, 00:35:36.268 "reset": true, 00:35:36.268 "nvme_admin": false, 00:35:36.268 "nvme_io": false, 00:35:36.268 "nvme_io_md": false, 00:35:36.268 "write_zeroes": true, 00:35:36.268 "zcopy": true, 00:35:36.268 "get_zone_info": false, 00:35:36.268 "zone_management": false, 00:35:36.268 "zone_append": false, 00:35:36.268 "compare": false, 00:35:36.268 "compare_and_write": false, 00:35:36.268 "abort": true, 00:35:36.268 "seek_hole": false, 00:35:36.268 "seek_data": false, 00:35:36.268 "copy": true, 00:35:36.268 "nvme_iov_md": false 00:35:36.268 }, 00:35:36.268 "memory_domains": [ 00:35:36.268 { 00:35:36.268 "dma_device_id": "system", 00:35:36.268 "dma_device_type": 1 00:35:36.268 }, 00:35:36.268 { 00:35:36.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:36.268 "dma_device_type": 2 00:35:36.268 } 00:35:36.268 ], 00:35:36.268 "driver_specific": {} 00:35:36.268 } 00:35:36.268 ] 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:36.268 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.526 05:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:36.526 "name": "Existed_Raid", 00:35:36.526 "uuid": "e979186d-411d-4b42-bf9f-2d96541f4d21", 00:35:36.526 "strip_size_kb": 0, 00:35:36.526 "state": "configuring", 00:35:36.526 "raid_level": "raid1", 00:35:36.526 "superblock": true, 00:35:36.526 "num_base_bdevs": 2, 00:35:36.526 "num_base_bdevs_discovered": 1, 00:35:36.526 "num_base_bdevs_operational": 2, 00:35:36.526 "base_bdevs_list": [ 00:35:36.526 { 00:35:36.526 "name": "BaseBdev1", 00:35:36.526 "uuid": "d1645b76-d583-477f-9dc0-744ae92ba1d0", 00:35:36.526 "is_configured": true, 00:35:36.526 "data_offset": 2048, 00:35:36.526 "data_size": 63488 00:35:36.526 }, 00:35:36.526 { 00:35:36.526 "name": "BaseBdev2", 00:35:36.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:36.526 "is_configured": false, 00:35:36.526 
"data_offset": 0, 00:35:36.526 "data_size": 0 00:35:36.526 } 00:35:36.526 ] 00:35:36.526 }' 00:35:36.526 05:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:36.526 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:36.784 05:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:36.784 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.784 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:36.784 [2024-12-09 05:26:23.744245] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:36.784 [2024-12-09 05:26:23.744312] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:35:36.784 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.784 05:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:35:36.784 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.784 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:36.784 [2024-12-09 05:26:23.752321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:36.784 [2024-12-09 05:26:23.755022] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:36.784 [2024-12-09 05:26:23.755076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:37.042 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.042 05:26:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:35:37.042 05:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:37.042 05:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:35:37.042 05:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:37.042 05:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:37.042 05:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:37.042 05:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:37.042 05:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:37.042 05:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:37.042 05:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:37.042 05:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:37.042 05:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:37.042 05:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:37.042 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.042 05:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:37.042 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:37.042 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.042 05:26:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:37.042 "name": "Existed_Raid", 00:35:37.042 "uuid": "51116c26-64c4-4925-9d84-8856564bc365", 00:35:37.042 "strip_size_kb": 0, 00:35:37.042 "state": "configuring", 00:35:37.042 "raid_level": "raid1", 00:35:37.042 "superblock": true, 00:35:37.042 "num_base_bdevs": 2, 00:35:37.042 "num_base_bdevs_discovered": 1, 00:35:37.042 "num_base_bdevs_operational": 2, 00:35:37.042 "base_bdevs_list": [ 00:35:37.042 { 00:35:37.042 "name": "BaseBdev1", 00:35:37.042 "uuid": "d1645b76-d583-477f-9dc0-744ae92ba1d0", 00:35:37.042 "is_configured": true, 00:35:37.042 "data_offset": 2048, 00:35:37.042 "data_size": 63488 00:35:37.042 }, 00:35:37.042 { 00:35:37.042 "name": "BaseBdev2", 00:35:37.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:37.042 "is_configured": false, 00:35:37.042 "data_offset": 0, 00:35:37.042 "data_size": 0 00:35:37.042 } 00:35:37.042 ] 00:35:37.042 }' 00:35:37.042 05:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:37.042 05:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:37.609 [2024-12-09 05:26:24.323419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:37.609 [2024-12-09 05:26:24.323696] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:35:37.609 [2024-12-09 05:26:24.323714] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:35:37.609 [2024-12-09 05:26:24.324048] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:35:37.609 
BaseBdev2 00:35:37.609 [2024-12-09 05:26:24.324236] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:35:37.609 [2024-12-09 05:26:24.324257] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:35:37.609 [2024-12-09 05:26:24.324413] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:35:37.609 [ 00:35:37.609 { 00:35:37.609 "name": "BaseBdev2", 00:35:37.609 "aliases": [ 00:35:37.609 "14488879-9bf9-4338-817b-cb901ed3d649" 00:35:37.609 ], 00:35:37.609 "product_name": "Malloc disk", 00:35:37.609 "block_size": 512, 00:35:37.609 "num_blocks": 65536, 00:35:37.609 "uuid": "14488879-9bf9-4338-817b-cb901ed3d649", 00:35:37.609 "assigned_rate_limits": { 00:35:37.609 "rw_ios_per_sec": 0, 00:35:37.609 "rw_mbytes_per_sec": 0, 00:35:37.609 "r_mbytes_per_sec": 0, 00:35:37.609 "w_mbytes_per_sec": 0 00:35:37.609 }, 00:35:37.609 "claimed": true, 00:35:37.609 "claim_type": "exclusive_write", 00:35:37.609 "zoned": false, 00:35:37.609 "supported_io_types": { 00:35:37.609 "read": true, 00:35:37.609 "write": true, 00:35:37.609 "unmap": true, 00:35:37.609 "flush": true, 00:35:37.609 "reset": true, 00:35:37.609 "nvme_admin": false, 00:35:37.609 "nvme_io": false, 00:35:37.609 "nvme_io_md": false, 00:35:37.609 "write_zeroes": true, 00:35:37.609 "zcopy": true, 00:35:37.609 "get_zone_info": false, 00:35:37.609 "zone_management": false, 00:35:37.609 "zone_append": false, 00:35:37.609 "compare": false, 00:35:37.609 "compare_and_write": false, 00:35:37.609 "abort": true, 00:35:37.609 "seek_hole": false, 00:35:37.609 "seek_data": false, 00:35:37.609 "copy": true, 00:35:37.609 "nvme_iov_md": false 00:35:37.609 }, 00:35:37.609 "memory_domains": [ 00:35:37.609 { 00:35:37.609 "dma_device_id": "system", 00:35:37.609 "dma_device_type": 1 00:35:37.609 }, 00:35:37.609 { 00:35:37.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:37.609 "dma_device_type": 2 00:35:37.609 } 00:35:37.609 ], 00:35:37.609 "driver_specific": {} 00:35:37.609 } 00:35:37.609 ] 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:37.609 05:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.610 05:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:35:37.610 "name": "Existed_Raid", 00:35:37.610 "uuid": "51116c26-64c4-4925-9d84-8856564bc365", 00:35:37.610 "strip_size_kb": 0, 00:35:37.610 "state": "online", 00:35:37.610 "raid_level": "raid1", 00:35:37.610 "superblock": true, 00:35:37.610 "num_base_bdevs": 2, 00:35:37.610 "num_base_bdevs_discovered": 2, 00:35:37.610 "num_base_bdevs_operational": 2, 00:35:37.610 "base_bdevs_list": [ 00:35:37.610 { 00:35:37.610 "name": "BaseBdev1", 00:35:37.610 "uuid": "d1645b76-d583-477f-9dc0-744ae92ba1d0", 00:35:37.610 "is_configured": true, 00:35:37.610 "data_offset": 2048, 00:35:37.610 "data_size": 63488 00:35:37.610 }, 00:35:37.610 { 00:35:37.610 "name": "BaseBdev2", 00:35:37.610 "uuid": "14488879-9bf9-4338-817b-cb901ed3d649", 00:35:37.610 "is_configured": true, 00:35:37.610 "data_offset": 2048, 00:35:37.610 "data_size": 63488 00:35:37.610 } 00:35:37.610 ] 00:35:37.610 }' 00:35:37.610 05:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:37.610 05:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:38.176 05:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:35:38.176 05:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:35:38.176 05:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:38.176 05:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:38.176 05:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:35:38.176 05:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:38.176 05:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:38.176 05:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- 
# rpc_cmd bdev_get_bdevs -b Existed_Raid 00:35:38.176 05:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.176 05:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:38.176 [2024-12-09 05:26:24.884012] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:38.176 05:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.176 05:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:38.176 "name": "Existed_Raid", 00:35:38.176 "aliases": [ 00:35:38.176 "51116c26-64c4-4925-9d84-8856564bc365" 00:35:38.176 ], 00:35:38.176 "product_name": "Raid Volume", 00:35:38.176 "block_size": 512, 00:35:38.176 "num_blocks": 63488, 00:35:38.176 "uuid": "51116c26-64c4-4925-9d84-8856564bc365", 00:35:38.176 "assigned_rate_limits": { 00:35:38.176 "rw_ios_per_sec": 0, 00:35:38.176 "rw_mbytes_per_sec": 0, 00:35:38.176 "r_mbytes_per_sec": 0, 00:35:38.176 "w_mbytes_per_sec": 0 00:35:38.176 }, 00:35:38.176 "claimed": false, 00:35:38.176 "zoned": false, 00:35:38.176 "supported_io_types": { 00:35:38.176 "read": true, 00:35:38.176 "write": true, 00:35:38.176 "unmap": false, 00:35:38.176 "flush": false, 00:35:38.176 "reset": true, 00:35:38.176 "nvme_admin": false, 00:35:38.176 "nvme_io": false, 00:35:38.176 "nvme_io_md": false, 00:35:38.176 "write_zeroes": true, 00:35:38.176 "zcopy": false, 00:35:38.176 "get_zone_info": false, 00:35:38.176 "zone_management": false, 00:35:38.176 "zone_append": false, 00:35:38.177 "compare": false, 00:35:38.177 "compare_and_write": false, 00:35:38.177 "abort": false, 00:35:38.177 "seek_hole": false, 00:35:38.177 "seek_data": false, 00:35:38.177 "copy": false, 00:35:38.177 "nvme_iov_md": false 00:35:38.177 }, 00:35:38.177 "memory_domains": [ 00:35:38.177 { 00:35:38.177 "dma_device_id": "system", 00:35:38.177 "dma_device_type": 1 00:35:38.177 }, 
00:35:38.177 { 00:35:38.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:38.177 "dma_device_type": 2 00:35:38.177 }, 00:35:38.177 { 00:35:38.177 "dma_device_id": "system", 00:35:38.177 "dma_device_type": 1 00:35:38.177 }, 00:35:38.177 { 00:35:38.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:38.177 "dma_device_type": 2 00:35:38.177 } 00:35:38.177 ], 00:35:38.177 "driver_specific": { 00:35:38.177 "raid": { 00:35:38.177 "uuid": "51116c26-64c4-4925-9d84-8856564bc365", 00:35:38.177 "strip_size_kb": 0, 00:35:38.177 "state": "online", 00:35:38.177 "raid_level": "raid1", 00:35:38.177 "superblock": true, 00:35:38.177 "num_base_bdevs": 2, 00:35:38.177 "num_base_bdevs_discovered": 2, 00:35:38.177 "num_base_bdevs_operational": 2, 00:35:38.177 "base_bdevs_list": [ 00:35:38.177 { 00:35:38.177 "name": "BaseBdev1", 00:35:38.177 "uuid": "d1645b76-d583-477f-9dc0-744ae92ba1d0", 00:35:38.177 "is_configured": true, 00:35:38.177 "data_offset": 2048, 00:35:38.177 "data_size": 63488 00:35:38.177 }, 00:35:38.177 { 00:35:38.177 "name": "BaseBdev2", 00:35:38.177 "uuid": "14488879-9bf9-4338-817b-cb901ed3d649", 00:35:38.177 "is_configured": true, 00:35:38.177 "data_offset": 2048, 00:35:38.177 "data_size": 63488 00:35:38.177 } 00:35:38.177 ] 00:35:38.177 } 00:35:38.177 } 00:35:38.177 }' 00:35:38.177 05:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:38.177 05:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:35:38.177 BaseBdev2' 00:35:38.177 05:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:38.177 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:38.177 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:35:38.177 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:35:38.177 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:38.177 05:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.177 05:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:38.177 05:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.177 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:38.177 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:38.177 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:38.177 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:35:38.177 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:38.177 05:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.177 05:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:38.177 05:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.177 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:38.177 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:38.177 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:35:38.177 05:26:25 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.177 05:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:38.177 [2024-12-09 05:26:25.139736] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:38.435 05:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.435 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:35:38.435 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:35:38.435 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:35:38.435 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:35:38.435 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:35:38.435 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:35:38.435 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:38.435 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:38.435 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:38.435 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:38.435 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:35:38.435 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:38.435 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:38.435 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:38.435 
05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:38.435 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:38.435 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:38.435 05:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.435 05:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:38.435 05:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.435 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:38.435 "name": "Existed_Raid", 00:35:38.435 "uuid": "51116c26-64c4-4925-9d84-8856564bc365", 00:35:38.435 "strip_size_kb": 0, 00:35:38.435 "state": "online", 00:35:38.435 "raid_level": "raid1", 00:35:38.435 "superblock": true, 00:35:38.435 "num_base_bdevs": 2, 00:35:38.435 "num_base_bdevs_discovered": 1, 00:35:38.435 "num_base_bdevs_operational": 1, 00:35:38.435 "base_bdevs_list": [ 00:35:38.435 { 00:35:38.435 "name": null, 00:35:38.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:38.435 "is_configured": false, 00:35:38.435 "data_offset": 0, 00:35:38.435 "data_size": 63488 00:35:38.435 }, 00:35:38.435 { 00:35:38.435 "name": "BaseBdev2", 00:35:38.435 "uuid": "14488879-9bf9-4338-817b-cb901ed3d649", 00:35:38.435 "is_configured": true, 00:35:38.435 "data_offset": 2048, 00:35:38.435 "data_size": 63488 00:35:38.435 } 00:35:38.435 ] 00:35:38.435 }' 00:35:38.435 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:38.435 05:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:39.000 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:35:39.000 05:26:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:39.000 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:39.000 05:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.001 05:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:39.001 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:35:39.001 05:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.001 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:35:39.001 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:39.001 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:35:39.001 05:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.001 05:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:39.001 [2024-12-09 05:26:25.804746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:39.001 [2024-12-09 05:26:25.804927] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:39.001 [2024-12-09 05:26:25.888903] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:39.001 [2024-12-09 05:26:25.888986] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:39.001 [2024-12-09 05:26:25.889005] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:35:39.001 05:26:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.001 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:35:39.001 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:39.001 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:39.001 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:35:39.001 05:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.001 05:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:39.001 05:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.001 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:35:39.001 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:35:39.001 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:35:39.001 05:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62969 00:35:39.001 05:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62969 ']' 00:35:39.001 05:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62969 00:35:39.001 05:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:35:39.001 05:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:39.001 05:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62969 00:35:39.259 05:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:39.259 killing process with pid 62969 
00:35:39.259 05:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:39.259 05:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62969' 00:35:39.259 05:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62969 00:35:39.259 [2024-12-09 05:26:25.980743] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:39.259 05:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62969 00:35:39.259 [2024-12-09 05:26:25.995249] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:40.194 05:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:35:40.194 00:35:40.194 real 0m5.571s 00:35:40.194 user 0m8.356s 00:35:40.194 sys 0m0.837s 00:35:40.194 05:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:40.194 ************************************ 00:35:40.194 END TEST raid_state_function_test_sb 00:35:40.194 ************************************ 00:35:40.194 05:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:40.453 05:26:27 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:35:40.453 05:26:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:40.453 05:26:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:40.453 05:26:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:40.453 ************************************ 00:35:40.453 START TEST raid_superblock_test 00:35:40.453 ************************************ 00:35:40.453 05:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:35:40.453 05:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:35:40.453 05:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:35:40.453 05:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:35:40.453 05:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:35:40.453 05:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:35:40.453 05:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:35:40.453 05:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:35:40.453 05:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:35:40.453 05:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:35:40.453 05:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:35:40.453 05:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:35:40.453 05:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:35:40.453 05:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:35:40.453 05:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:35:40.453 05:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:35:40.453 05:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63228 00:35:40.453 05:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63228 00:35:40.453 05:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63228 ']' 00:35:40.453 05:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:35:40.453 05:26:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:40.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:40.453 05:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:40.453 05:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:40.453 05:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:40.453 05:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:40.453 [2024-12-09 05:26:27.306143] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:35:40.453 [2024-12-09 05:26:27.306358] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63228 ] 00:35:40.711 [2024-12-09 05:26:27.497981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:40.711 [2024-12-09 05:26:27.632309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:40.970 [2024-12-09 05:26:27.843670] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:40.970 [2024-12-09 05:26:27.843734] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:41.537 05:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:41.537 05:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:35:41.537 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:35:41.537 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:41.537 05:26:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:35:41.537 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:35:41.537 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:35:41.537 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:41.537 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:35:41.537 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:41.537 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:35:41.537 05:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:41.538 malloc1 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:41.538 [2024-12-09 05:26:28.357351] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:41.538 [2024-12-09 05:26:28.357440] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:41.538 [2024-12-09 05:26:28.357470] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:35:41.538 [2024-12-09 05:26:28.357484] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:41.538 
[2024-12-09 05:26:28.360448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:41.538 [2024-12-09 05:26:28.360487] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:41.538 pt1 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:41.538 malloc2 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.538 05:26:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:41.538 [2024-12-09 05:26:28.416156] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:41.538 [2024-12-09 05:26:28.416235] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:41.538 [2024-12-09 05:26:28.416271] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:35:41.538 [2024-12-09 05:26:28.416286] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:41.538 [2024-12-09 05:26:28.419415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:41.538 [2024-12-09 05:26:28.419488] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:41.538 pt2 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:41.538 [2024-12-09 05:26:28.428150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:41.538 [2024-12-09 05:26:28.430857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:41.538 [2024-12-09 05:26:28.431108] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:35:41.538 [2024-12-09 05:26:28.431134] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:35:41.538 [2024-12-09 
05:26:28.431445] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:35:41.538 [2024-12-09 05:26:28.431636] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:35:41.538 [2024-12-09 05:26:28.431665] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:35:41.538 [2024-12-09 05:26:28.431864] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:41.538 05:26:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:41.538 "name": "raid_bdev1", 00:35:41.538 "uuid": "694d8c61-157d-4534-afdb-5156b91a94c2", 00:35:41.538 "strip_size_kb": 0, 00:35:41.538 "state": "online", 00:35:41.538 "raid_level": "raid1", 00:35:41.538 "superblock": true, 00:35:41.538 "num_base_bdevs": 2, 00:35:41.538 "num_base_bdevs_discovered": 2, 00:35:41.538 "num_base_bdevs_operational": 2, 00:35:41.538 "base_bdevs_list": [ 00:35:41.538 { 00:35:41.538 "name": "pt1", 00:35:41.538 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:41.538 "is_configured": true, 00:35:41.538 "data_offset": 2048, 00:35:41.538 "data_size": 63488 00:35:41.538 }, 00:35:41.538 { 00:35:41.538 "name": "pt2", 00:35:41.538 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:41.538 "is_configured": true, 00:35:41.538 "data_offset": 2048, 00:35:41.538 "data_size": 63488 00:35:41.538 } 00:35:41.538 ] 00:35:41.538 }' 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:41.538 05:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:42.105 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:35:42.105 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:35:42.105 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:42.105 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:42.105 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:35:42.105 
05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:42.105 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:42.105 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:42.105 05:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.105 05:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:42.105 [2024-12-09 05:26:28.924765] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:42.105 05:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.105 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:42.105 "name": "raid_bdev1", 00:35:42.105 "aliases": [ 00:35:42.105 "694d8c61-157d-4534-afdb-5156b91a94c2" 00:35:42.105 ], 00:35:42.105 "product_name": "Raid Volume", 00:35:42.105 "block_size": 512, 00:35:42.105 "num_blocks": 63488, 00:35:42.105 "uuid": "694d8c61-157d-4534-afdb-5156b91a94c2", 00:35:42.105 "assigned_rate_limits": { 00:35:42.105 "rw_ios_per_sec": 0, 00:35:42.105 "rw_mbytes_per_sec": 0, 00:35:42.105 "r_mbytes_per_sec": 0, 00:35:42.105 "w_mbytes_per_sec": 0 00:35:42.105 }, 00:35:42.105 "claimed": false, 00:35:42.105 "zoned": false, 00:35:42.105 "supported_io_types": { 00:35:42.105 "read": true, 00:35:42.105 "write": true, 00:35:42.105 "unmap": false, 00:35:42.105 "flush": false, 00:35:42.105 "reset": true, 00:35:42.105 "nvme_admin": false, 00:35:42.105 "nvme_io": false, 00:35:42.105 "nvme_io_md": false, 00:35:42.105 "write_zeroes": true, 00:35:42.105 "zcopy": false, 00:35:42.105 "get_zone_info": false, 00:35:42.105 "zone_management": false, 00:35:42.105 "zone_append": false, 00:35:42.105 "compare": false, 00:35:42.105 "compare_and_write": false, 00:35:42.105 "abort": false, 00:35:42.105 "seek_hole": false, 
00:35:42.105 "seek_data": false, 00:35:42.105 "copy": false, 00:35:42.105 "nvme_iov_md": false 00:35:42.105 }, 00:35:42.105 "memory_domains": [ 00:35:42.105 { 00:35:42.105 "dma_device_id": "system", 00:35:42.105 "dma_device_type": 1 00:35:42.105 }, 00:35:42.105 { 00:35:42.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:42.105 "dma_device_type": 2 00:35:42.105 }, 00:35:42.105 { 00:35:42.105 "dma_device_id": "system", 00:35:42.105 "dma_device_type": 1 00:35:42.105 }, 00:35:42.105 { 00:35:42.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:42.105 "dma_device_type": 2 00:35:42.105 } 00:35:42.105 ], 00:35:42.105 "driver_specific": { 00:35:42.105 "raid": { 00:35:42.105 "uuid": "694d8c61-157d-4534-afdb-5156b91a94c2", 00:35:42.105 "strip_size_kb": 0, 00:35:42.105 "state": "online", 00:35:42.105 "raid_level": "raid1", 00:35:42.105 "superblock": true, 00:35:42.105 "num_base_bdevs": 2, 00:35:42.105 "num_base_bdevs_discovered": 2, 00:35:42.105 "num_base_bdevs_operational": 2, 00:35:42.105 "base_bdevs_list": [ 00:35:42.105 { 00:35:42.105 "name": "pt1", 00:35:42.105 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:42.105 "is_configured": true, 00:35:42.105 "data_offset": 2048, 00:35:42.105 "data_size": 63488 00:35:42.105 }, 00:35:42.105 { 00:35:42.105 "name": "pt2", 00:35:42.105 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:42.105 "is_configured": true, 00:35:42.105 "data_offset": 2048, 00:35:42.105 "data_size": 63488 00:35:42.105 } 00:35:42.105 ] 00:35:42.105 } 00:35:42.105 } 00:35:42.105 }' 00:35:42.105 05:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:42.105 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:35:42.105 pt2' 00:35:42.105 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:42.105 05:26:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:42.105 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:42.105 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:35:42.105 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.105 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:42.105 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:35:42.364 [2024-12-09 05:26:29.172880] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=694d8c61-157d-4534-afdb-5156b91a94c2 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 694d8c61-157d-4534-afdb-5156b91a94c2 ']' 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:42.364 [2024-12-09 05:26:29.220454] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:42.364 [2024-12-09 05:26:29.220480] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:42.364 [2024-12-09 05:26:29.220592] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:42.364 [2024-12-09 05:26:29.220750] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:42.364 [2024-12-09 05:26:29.220771] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r 
'.[]' 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:42.364 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:42.622 [2024-12-09 05:26:29.348521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:35:42.622 [2024-12-09 05:26:29.351189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:35:42.622 [2024-12-09 05:26:29.351307] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:35:42.622 [2024-12-09 05:26:29.351389] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:35:42.622 [2024-12-09 05:26:29.351414] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:42.622 [2024-12-09 05:26:29.351428] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:35:42.622 request: 00:35:42.622 { 00:35:42.622 "name": "raid_bdev1", 00:35:42.622 "raid_level": "raid1", 00:35:42.622 "base_bdevs": [ 00:35:42.622 "malloc1", 00:35:42.622 "malloc2" 00:35:42.622 ], 00:35:42.622 "superblock": false, 00:35:42.622 "method": "bdev_raid_create", 00:35:42.622 "req_id": 1 00:35:42.622 } 00:35:42.622 Got JSON-RPC error response 00:35:42.622 response: 00:35:42.622 { 00:35:42.622 "code": -17, 00:35:42.622 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:35:42.622 } 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:42.622 [2024-12-09 05:26:29.408515] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:42.622 [2024-12-09 05:26:29.408584] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:42.622 [2024-12-09 05:26:29.408610] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:35:42.622 [2024-12-09 05:26:29.408626] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:42.622 [2024-12-09 05:26:29.411707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:42.622 [2024-12-09 05:26:29.411750] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:42.622 [2024-12-09 05:26:29.411892] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:35:42.622 [2024-12-09 05:26:29.411973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:42.622 pt1 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:42.622 05:26:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:42.622 "name": "raid_bdev1", 00:35:42.622 "uuid": "694d8c61-157d-4534-afdb-5156b91a94c2", 00:35:42.622 "strip_size_kb": 0, 00:35:42.622 "state": "configuring", 00:35:42.622 "raid_level": "raid1", 00:35:42.622 "superblock": true, 00:35:42.622 "num_base_bdevs": 2, 00:35:42.622 "num_base_bdevs_discovered": 1, 00:35:42.622 "num_base_bdevs_operational": 2, 00:35:42.622 "base_bdevs_list": [ 00:35:42.622 { 00:35:42.622 "name": "pt1", 00:35:42.622 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:42.622 
"is_configured": true, 00:35:42.622 "data_offset": 2048, 00:35:42.622 "data_size": 63488 00:35:42.622 }, 00:35:42.622 { 00:35:42.622 "name": null, 00:35:42.622 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:42.622 "is_configured": false, 00:35:42.622 "data_offset": 2048, 00:35:42.622 "data_size": 63488 00:35:42.622 } 00:35:42.622 ] 00:35:42.622 }' 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:42.622 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:43.187 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:35:43.187 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:35:43.187 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:35:43.187 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:43.187 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.187 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:43.187 [2024-12-09 05:26:29.944718] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:43.187 [2024-12-09 05:26:29.944853] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:43.187 [2024-12-09 05:26:29.944882] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:35:43.187 [2024-12-09 05:26:29.944898] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:43.187 [2024-12-09 05:26:29.945459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:43.187 [2024-12-09 05:26:29.945504] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:43.187 [2024-12-09 05:26:29.945606] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:35:43.187 [2024-12-09 05:26:29.945641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:43.187 [2024-12-09 05:26:29.945833] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:35:43.187 [2024-12-09 05:26:29.945855] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:35:43.187 [2024-12-09 05:26:29.946222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:35:43.187 [2024-12-09 05:26:29.946454] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:35:43.187 [2024-12-09 05:26:29.946467] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:35:43.187 [2024-12-09 05:26:29.946611] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:43.187 pt2 00:35:43.187 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.187 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:35:43.187 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:35:43.187 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:43.187 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:43.187 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:43.187 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:43.187 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:43.187 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:43.187 
05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:43.187 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:43.187 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:43.187 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:43.187 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:43.187 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.187 05:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:43.187 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:43.187 05:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.187 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:43.187 "name": "raid_bdev1", 00:35:43.187 "uuid": "694d8c61-157d-4534-afdb-5156b91a94c2", 00:35:43.187 "strip_size_kb": 0, 00:35:43.187 "state": "online", 00:35:43.187 "raid_level": "raid1", 00:35:43.187 "superblock": true, 00:35:43.187 "num_base_bdevs": 2, 00:35:43.187 "num_base_bdevs_discovered": 2, 00:35:43.187 "num_base_bdevs_operational": 2, 00:35:43.187 "base_bdevs_list": [ 00:35:43.187 { 00:35:43.187 "name": "pt1", 00:35:43.188 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:43.188 "is_configured": true, 00:35:43.188 "data_offset": 2048, 00:35:43.188 "data_size": 63488 00:35:43.188 }, 00:35:43.188 { 00:35:43.188 "name": "pt2", 00:35:43.188 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:43.188 "is_configured": true, 00:35:43.188 "data_offset": 2048, 00:35:43.188 "data_size": 63488 00:35:43.188 } 00:35:43.188 ] 00:35:43.188 }' 00:35:43.188 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:35:43.188 05:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:43.753 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:35:43.753 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:35:43.753 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:43.753 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:43.753 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:35:43.753 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:43.753 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:43.753 05:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.753 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:43.753 05:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:43.753 [2024-12-09 05:26:30.485317] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:43.753 05:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.753 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:43.753 "name": "raid_bdev1", 00:35:43.753 "aliases": [ 00:35:43.753 "694d8c61-157d-4534-afdb-5156b91a94c2" 00:35:43.753 ], 00:35:43.753 "product_name": "Raid Volume", 00:35:43.753 "block_size": 512, 00:35:43.753 "num_blocks": 63488, 00:35:43.753 "uuid": "694d8c61-157d-4534-afdb-5156b91a94c2", 00:35:43.753 "assigned_rate_limits": { 00:35:43.753 "rw_ios_per_sec": 0, 00:35:43.753 "rw_mbytes_per_sec": 0, 00:35:43.753 "r_mbytes_per_sec": 0, 00:35:43.753 "w_mbytes_per_sec": 0 
00:35:43.753 }, 00:35:43.753 "claimed": false, 00:35:43.753 "zoned": false, 00:35:43.753 "supported_io_types": { 00:35:43.753 "read": true, 00:35:43.753 "write": true, 00:35:43.753 "unmap": false, 00:35:43.753 "flush": false, 00:35:43.753 "reset": true, 00:35:43.753 "nvme_admin": false, 00:35:43.753 "nvme_io": false, 00:35:43.753 "nvme_io_md": false, 00:35:43.753 "write_zeroes": true, 00:35:43.753 "zcopy": false, 00:35:43.753 "get_zone_info": false, 00:35:43.753 "zone_management": false, 00:35:43.753 "zone_append": false, 00:35:43.753 "compare": false, 00:35:43.753 "compare_and_write": false, 00:35:43.753 "abort": false, 00:35:43.753 "seek_hole": false, 00:35:43.753 "seek_data": false, 00:35:43.753 "copy": false, 00:35:43.753 "nvme_iov_md": false 00:35:43.753 }, 00:35:43.753 "memory_domains": [ 00:35:43.753 { 00:35:43.753 "dma_device_id": "system", 00:35:43.753 "dma_device_type": 1 00:35:43.753 }, 00:35:43.753 { 00:35:43.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:43.753 "dma_device_type": 2 00:35:43.753 }, 00:35:43.753 { 00:35:43.753 "dma_device_id": "system", 00:35:43.753 "dma_device_type": 1 00:35:43.753 }, 00:35:43.753 { 00:35:43.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:43.753 "dma_device_type": 2 00:35:43.753 } 00:35:43.753 ], 00:35:43.753 "driver_specific": { 00:35:43.753 "raid": { 00:35:43.753 "uuid": "694d8c61-157d-4534-afdb-5156b91a94c2", 00:35:43.753 "strip_size_kb": 0, 00:35:43.753 "state": "online", 00:35:43.753 "raid_level": "raid1", 00:35:43.753 "superblock": true, 00:35:43.753 "num_base_bdevs": 2, 00:35:43.753 "num_base_bdevs_discovered": 2, 00:35:43.753 "num_base_bdevs_operational": 2, 00:35:43.753 "base_bdevs_list": [ 00:35:43.753 { 00:35:43.753 "name": "pt1", 00:35:43.753 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:43.753 "is_configured": true, 00:35:43.753 "data_offset": 2048, 00:35:43.753 "data_size": 63488 00:35:43.753 }, 00:35:43.753 { 00:35:43.753 "name": "pt2", 00:35:43.753 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:35:43.753 "is_configured": true, 00:35:43.753 "data_offset": 2048, 00:35:43.753 "data_size": 63488 00:35:43.753 } 00:35:43.753 ] 00:35:43.753 } 00:35:43.753 } 00:35:43.753 }' 00:35:43.753 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:43.753 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:35:43.753 pt2' 00:35:43.753 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:43.753 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:43.753 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:43.753 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:35:43.753 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:43.753 05:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.753 05:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:43.753 05:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.753 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:43.753 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:43.753 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:43.753 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:35:43.753 05:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:43.753 05:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:43.753 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:43.753 05:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:44.011 [2024-12-09 05:26:30.745400] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 694d8c61-157d-4534-afdb-5156b91a94c2 '!=' 694d8c61-157d-4534-afdb-5156b91a94c2 ']' 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:35:44.011 [2024-12-09 05:26:30.793069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:35:44.011 "name": "raid_bdev1", 00:35:44.011 "uuid": "694d8c61-157d-4534-afdb-5156b91a94c2", 00:35:44.011 "strip_size_kb": 0, 00:35:44.011 "state": "online", 00:35:44.011 "raid_level": "raid1", 00:35:44.011 "superblock": true, 00:35:44.011 "num_base_bdevs": 2, 00:35:44.011 "num_base_bdevs_discovered": 1, 00:35:44.011 "num_base_bdevs_operational": 1, 00:35:44.011 "base_bdevs_list": [ 00:35:44.011 { 00:35:44.011 "name": null, 00:35:44.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:44.011 "is_configured": false, 00:35:44.011 "data_offset": 0, 00:35:44.011 "data_size": 63488 00:35:44.011 }, 00:35:44.011 { 00:35:44.011 "name": "pt2", 00:35:44.011 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:44.011 "is_configured": true, 00:35:44.011 "data_offset": 2048, 00:35:44.011 "data_size": 63488 00:35:44.011 } 00:35:44.011 ] 00:35:44.011 }' 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:44.011 05:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:44.582 [2024-12-09 05:26:31.321223] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:44.582 [2024-12-09 05:26:31.321254] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:44.582 [2024-12-09 05:26:31.321342] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:44.582 [2024-12-09 05:26:31.321404] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:44.582 [2024-12-09 05:26:31.321422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:44.582 [2024-12-09 05:26:31.393224] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:44.582 [2024-12-09 05:26:31.393297] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:44.582 [2024-12-09 05:26:31.393320] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:35:44.582 [2024-12-09 05:26:31.393335] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:44.582 [2024-12-09 05:26:31.396418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:44.582 [2024-12-09 05:26:31.396519] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:44.582 [2024-12-09 05:26:31.396626] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:35:44.582 [2024-12-09 05:26:31.396688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:44.582 [2024-12-09 05:26:31.396874] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:35:44.582 [2024-12-09 05:26:31.396896] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:35:44.582 [2024-12-09 05:26:31.397220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:35:44.582 [2024-12-09 05:26:31.397393] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:35:44.582 [2024-12-09 05:26:31.397408] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:35:44.582 [2024-12-09 05:26:31.397605] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:44.582 pt2 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:35:44.582 "name": "raid_bdev1", 00:35:44.582 "uuid": "694d8c61-157d-4534-afdb-5156b91a94c2", 00:35:44.582 "strip_size_kb": 0, 00:35:44.582 "state": "online", 00:35:44.582 "raid_level": "raid1", 00:35:44.582 "superblock": true, 00:35:44.582 "num_base_bdevs": 2, 00:35:44.582 "num_base_bdevs_discovered": 1, 00:35:44.582 "num_base_bdevs_operational": 1, 00:35:44.582 "base_bdevs_list": [ 00:35:44.582 { 00:35:44.582 "name": null, 00:35:44.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:44.582 "is_configured": false, 00:35:44.582 "data_offset": 2048, 00:35:44.582 "data_size": 63488 00:35:44.582 }, 00:35:44.582 { 00:35:44.582 "name": "pt2", 00:35:44.582 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:44.582 "is_configured": true, 00:35:44.582 "data_offset": 2048, 00:35:44.582 "data_size": 63488 00:35:44.582 } 00:35:44.582 ] 00:35:44.582 }' 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:44.582 05:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:45.157 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:35:45.157 05:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.157 05:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:45.157 [2024-12-09 05:26:31.929714] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:45.157 [2024-12-09 05:26:31.929751] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:45.157 [2024-12-09 05:26:31.929886] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:45.157 [2024-12-09 05:26:31.930033] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:45.157 [2024-12-09 05:26:31.930059] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:35:45.157 05:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.157 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:45.157 05:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.157 05:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:45.157 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:35:45.157 05:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.157 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:35:45.157 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:35:45.157 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:35:45.157 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:45.157 05:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.157 05:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:45.157 [2024-12-09 05:26:31.993709] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:45.157 [2024-12-09 05:26:31.993827] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:45.157 [2024-12-09 05:26:31.993859] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:35:45.157 [2024-12-09 05:26:31.993873] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:45.157 [2024-12-09 05:26:31.997218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:45.157 [2024-12-09 05:26:31.997260] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:45.157 [2024-12-09 05:26:31.997381] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:35:45.157 [2024-12-09 05:26:31.997433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:45.157 [2024-12-09 05:26:31.997621] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:35:45.157 [2024-12-09 05:26:31.997639] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:45.157 [2024-12-09 05:26:31.997659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:35:45.157 [2024-12-09 05:26:31.997715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:45.157 [2024-12-09 05:26:31.997902] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:35:45.157 [2024-12-09 05:26:31.997920] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:35:45.157 pt1 00:35:45.157 [2024-12-09 05:26:31.998264] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:35:45.157 [2024-12-09 05:26:31.998504] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:35:45.157 [2024-12-09 05:26:31.998525] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:35:45.157 05:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.157 [2024-12-09 05:26:31.998708] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:45.157 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:35:45.157 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:35:45.157 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:45.157 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:45.157 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:45.157 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:45.157 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:35:45.157 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:45.157 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:45.157 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:45.157 05:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:45.157 05:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:45.157 05:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.157 05:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:45.157 05:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:45.157 05:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.157 05:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:45.157 "name": "raid_bdev1", 00:35:45.157 "uuid": "694d8c61-157d-4534-afdb-5156b91a94c2", 00:35:45.157 "strip_size_kb": 0, 00:35:45.157 "state": "online", 00:35:45.157 "raid_level": "raid1", 00:35:45.157 "superblock": true, 00:35:45.157 "num_base_bdevs": 2, 00:35:45.157 "num_base_bdevs_discovered": 1, 00:35:45.157 "num_base_bdevs_operational": 
1, 00:35:45.157 "base_bdevs_list": [ 00:35:45.157 { 00:35:45.157 "name": null, 00:35:45.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:45.157 "is_configured": false, 00:35:45.157 "data_offset": 2048, 00:35:45.157 "data_size": 63488 00:35:45.157 }, 00:35:45.157 { 00:35:45.157 "name": "pt2", 00:35:45.157 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:45.157 "is_configured": true, 00:35:45.157 "data_offset": 2048, 00:35:45.157 "data_size": 63488 00:35:45.157 } 00:35:45.157 ] 00:35:45.157 }' 00:35:45.157 05:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:45.157 05:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:45.723 05:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:35:45.723 05:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:35:45.723 05:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.724 05:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:45.724 05:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.724 05:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:35:45.724 05:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:45.724 05:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.724 05:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:45.724 05:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:35:45.724 [2024-12-09 05:26:32.586265] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:45.724 05:26:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.724 05:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 694d8c61-157d-4534-afdb-5156b91a94c2 '!=' 694d8c61-157d-4534-afdb-5156b91a94c2 ']' 00:35:45.724 05:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63228 00:35:45.724 05:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63228 ']' 00:35:45.724 05:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63228 00:35:45.724 05:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:35:45.724 05:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:45.724 05:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63228 00:35:45.724 killing process with pid 63228 00:35:45.724 05:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:45.724 05:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:45.724 05:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63228' 00:35:45.724 05:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63228 00:35:45.724 [2024-12-09 05:26:32.663817] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:45.724 [2024-12-09 05:26:32.663917] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:45.724 05:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63228 00:35:45.724 [2024-12-09 05:26:32.663977] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:45.724 [2024-12-09 05:26:32.663999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state 
offline 00:35:45.982 [2024-12-09 05:26:32.845704] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:47.357 05:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:35:47.357 00:35:47.357 real 0m6.882s 00:35:47.357 user 0m10.727s 00:35:47.357 sys 0m1.045s 00:35:47.357 ************************************ 00:35:47.357 END TEST raid_superblock_test 00:35:47.357 ************************************ 00:35:47.357 05:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:47.357 05:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:47.357 05:26:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:35:47.357 05:26:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:35:47.357 05:26:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:47.357 05:26:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:47.357 ************************************ 00:35:47.357 START TEST raid_read_error_test 00:35:47.357 ************************************ 00:35:47.357 05:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:35:47.357 05:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:35:47.357 05:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:35:47.357 05:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:35:47.357 05:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:35:47.357 05:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:35:47.357 05:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:35:47.357 05:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:35:47.357 05:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:35:47.357 05:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:35:47.357 05:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:35:47.357 05:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:35:47.357 05:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:35:47.357 05:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:35:47.357 05:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:35:47.357 05:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:35:47.357 05:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:35:47.357 05:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:35:47.357 05:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:35:47.357 05:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:35:47.357 05:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:35:47.357 05:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:35:47.357 05:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.sf0q9NLXVI 00:35:47.357 05:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63558 00:35:47.357 05:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63558 00:35:47.357 05:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63558 ']' 00:35:47.357 05:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:35:47.357 05:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:47.357 05:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:47.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:47.357 05:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:47.357 05:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:47.357 05:26:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:47.357 [2024-12-09 05:26:34.260565] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:35:47.357 [2024-12-09 05:26:34.260754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63558 ] 00:35:47.616 [2024-12-09 05:26:34.451142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:47.875 [2024-12-09 05:26:34.587658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:47.875 [2024-12-09 05:26:34.814465] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:47.875 [2024-12-09 05:26:34.814834] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 
-- # for bdev in "${base_bdevs[@]}" 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:48.504 BaseBdev1_malloc 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:48.504 true 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:48.504 [2024-12-09 05:26:35.264377] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:35:48.504 [2024-12-09 05:26:35.264458] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:48.504 [2024-12-09 05:26:35.264485] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:35:48.504 [2024-12-09 05:26:35.264501] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:48.504 [2024-12-09 05:26:35.267568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:48.504 [2024-12-09 05:26:35.267629] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:48.504 BaseBdev1 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:48.504 BaseBdev2_malloc 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:48.504 true 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:48.504 [2024-12-09 05:26:35.331753] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:35:48.504 [2024-12-09 05:26:35.331843] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:48.504 [2024-12-09 05:26:35.331867] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:35:48.504 [2024-12-09 
05:26:35.331882] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:48.504 [2024-12-09 05:26:35.334768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:48.504 [2024-12-09 05:26:35.335069] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:35:48.504 BaseBdev2 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:48.504 [2024-12-09 05:26:35.339945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:48.504 [2024-12-09 05:26:35.342583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:48.504 [2024-12-09 05:26:35.343000] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:35:48.504 [2024-12-09 05:26:35.343029] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:35:48.504 [2024-12-09 05:26:35.343341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:35:48.504 [2024-12-09 05:26:35.343558] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:35:48.504 [2024-12-09 05:26:35.343573] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:35:48.504 [2024-12-09 05:26:35.343736] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.504 05:26:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:48.504 "name": "raid_bdev1", 00:35:48.504 "uuid": "c804242b-8b75-4299-89ff-37f86f6f3aa8", 00:35:48.504 "strip_size_kb": 0, 00:35:48.504 "state": "online", 00:35:48.504 "raid_level": "raid1", 00:35:48.504 "superblock": true, 00:35:48.504 "num_base_bdevs": 2, 
00:35:48.504 "num_base_bdevs_discovered": 2, 00:35:48.504 "num_base_bdevs_operational": 2, 00:35:48.504 "base_bdevs_list": [ 00:35:48.504 { 00:35:48.504 "name": "BaseBdev1", 00:35:48.504 "uuid": "7f968fb7-f75f-53ea-8f0c-d0e3a906dbd6", 00:35:48.504 "is_configured": true, 00:35:48.504 "data_offset": 2048, 00:35:48.504 "data_size": 63488 00:35:48.504 }, 00:35:48.504 { 00:35:48.504 "name": "BaseBdev2", 00:35:48.504 "uuid": "f4d99bf8-c9f5-50e2-829a-ac92dfb2bb18", 00:35:48.504 "is_configured": true, 00:35:48.504 "data_offset": 2048, 00:35:48.504 "data_size": 63488 00:35:48.504 } 00:35:48.504 ] 00:35:48.504 }' 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:48.504 05:26:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:49.070 05:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:35:49.070 05:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:35:49.070 [2024-12-09 05:26:35.977844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:35:50.004 05:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:35:50.004 05:26:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.004 05:26:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:50.004 05:26:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.004 05:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:35:50.004 05:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:35:50.004 05:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:35:50.004 05:26:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:35:50.004 05:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:50.004 05:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:50.004 05:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:50.004 05:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:50.004 05:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:50.004 05:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:50.004 05:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:50.004 05:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:50.004 05:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:50.004 05:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:50.004 05:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:50.004 05:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:50.004 05:26:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.004 05:26:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:50.004 05:26:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.004 05:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:50.004 "name": "raid_bdev1", 00:35:50.004 "uuid": "c804242b-8b75-4299-89ff-37f86f6f3aa8", 00:35:50.004 "strip_size_kb": 0, 00:35:50.004 "state": "online", 
00:35:50.004 "raid_level": "raid1", 00:35:50.004 "superblock": true, 00:35:50.004 "num_base_bdevs": 2, 00:35:50.004 "num_base_bdevs_discovered": 2, 00:35:50.004 "num_base_bdevs_operational": 2, 00:35:50.004 "base_bdevs_list": [ 00:35:50.004 { 00:35:50.004 "name": "BaseBdev1", 00:35:50.004 "uuid": "7f968fb7-f75f-53ea-8f0c-d0e3a906dbd6", 00:35:50.004 "is_configured": true, 00:35:50.004 "data_offset": 2048, 00:35:50.004 "data_size": 63488 00:35:50.004 }, 00:35:50.004 { 00:35:50.004 "name": "BaseBdev2", 00:35:50.004 "uuid": "f4d99bf8-c9f5-50e2-829a-ac92dfb2bb18", 00:35:50.004 "is_configured": true, 00:35:50.004 "data_offset": 2048, 00:35:50.004 "data_size": 63488 00:35:50.004 } 00:35:50.004 ] 00:35:50.004 }' 00:35:50.004 05:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:50.004 05:26:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:50.572 05:26:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:35:50.572 05:26:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.572 05:26:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:50.572 [2024-12-09 05:26:37.408413] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:50.572 [2024-12-09 05:26:37.408465] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:50.572 [2024-12-09 05:26:37.412495] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:50.572 [2024-12-09 05:26:37.412855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:50.572 [2024-12-09 05:26:37.413111] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:50.572 [2024-12-09 05:26:37.413283] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:35:50.572 { 00:35:50.572 "results": [ 00:35:50.572 { 00:35:50.572 "job": "raid_bdev1", 00:35:50.572 "core_mask": "0x1", 00:35:50.572 "workload": "randrw", 00:35:50.572 "percentage": 50, 00:35:50.572 "status": "finished", 00:35:50.572 "queue_depth": 1, 00:35:50.572 "io_size": 131072, 00:35:50.572 "runtime": 1.427507, 00:35:50.572 "iops": 12069.99335204661, 00:35:50.572 "mibps": 1508.7491690058262, 00:35:50.572 "io_failed": 0, 00:35:50.572 "io_timeout": 0, 00:35:50.572 "avg_latency_us": 78.52062174853586, 00:35:50.572 "min_latency_us": 37.46909090909091, 00:35:50.572 "max_latency_us": 1884.16 00:35:50.572 } 00:35:50.572 ], 00:35:50.572 "core_count": 1 00:35:50.572 } 00:35:50.572 05:26:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.572 05:26:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63558 00:35:50.572 05:26:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63558 ']' 00:35:50.572 05:26:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63558 00:35:50.572 05:26:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:35:50.572 05:26:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:50.572 05:26:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63558 00:35:50.572 killing process with pid 63558 00:35:50.572 05:26:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:50.572 05:26:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:50.572 05:26:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63558' 00:35:50.572 05:26:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63558 00:35:50.572 [2024-12-09 05:26:37.455929] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:50.572 05:26:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63558 00:35:50.830 [2024-12-09 05:26:37.580254] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:52.206 05:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.sf0q9NLXVI 00:35:52.206 05:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:35:52.206 05:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:35:52.206 05:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:35:52.206 05:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:35:52.206 05:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:35:52.206 05:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:35:52.206 05:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:35:52.206 00:35:52.206 real 0m4.710s 00:35:52.206 user 0m5.724s 00:35:52.206 sys 0m0.671s 00:35:52.206 05:26:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:52.206 ************************************ 00:35:52.206 END TEST raid_read_error_test 00:35:52.206 ************************************ 00:35:52.206 05:26:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:52.206 05:26:38 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:35:52.206 05:26:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:35:52.206 05:26:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:52.206 05:26:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:52.206 ************************************ 00:35:52.206 START TEST raid_write_error_test 
00:35:52.206 ************************************ 00:35:52.206 05:26:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:35:52.206 05:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:35:52.206 05:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:35:52.206 05:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:35:52.206 05:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:35:52.206 05:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:35:52.206 05:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:35:52.206 05:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:35:52.206 05:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:35:52.206 05:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:35:52.206 05:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:35:52.206 05:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:35:52.206 05:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:35:52.206 05:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:35:52.206 05:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:35:52.206 05:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:35:52.206 05:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:35:52.206 05:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:35:52.206 05:26:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:35:52.206 05:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:35:52.206 05:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:35:52.206 05:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:35:52.206 05:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.TS29bMnySF 00:35:52.206 05:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63709 00:35:52.206 05:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63709 00:35:52.206 05:26:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63709 ']' 00:35:52.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:52.206 05:26:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:52.206 05:26:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:52.206 05:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:35:52.206 05:26:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:52.206 05:26:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:52.206 05:26:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:52.206 [2024-12-09 05:26:39.030081] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:35:52.206 [2024-12-09 05:26:39.030323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63709 ] 00:35:52.465 [2024-12-09 05:26:39.221622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:52.465 [2024-12-09 05:26:39.367270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:52.724 [2024-12-09 05:26:39.594819] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:52.724 [2024-12-09 05:26:39.594903] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:53.290 05:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:53.290 05:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:35:53.290 05:26:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:35:53.291 05:26:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:35:53.291 05:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.291 05:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:53.291 BaseBdev1_malloc 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:53.291 true 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:53.291 [2024-12-09 05:26:40.022262] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:35:53.291 [2024-12-09 05:26:40.022355] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:53.291 [2024-12-09 05:26:40.022387] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:35:53.291 [2024-12-09 05:26:40.022439] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:53.291 [2024-12-09 05:26:40.025620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:53.291 [2024-12-09 05:26:40.025703] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:53.291 BaseBdev1 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:53.291 BaseBdev2_malloc 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:35:53.291 05:26:40 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:53.291 true 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:53.291 [2024-12-09 05:26:40.090873] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:35:53.291 [2024-12-09 05:26:40.090960] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:53.291 [2024-12-09 05:26:40.090986] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:35:53.291 [2024-12-09 05:26:40.091002] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:53.291 [2024-12-09 05:26:40.093760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:53.291 [2024-12-09 05:26:40.093826] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:35:53.291 BaseBdev2 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:53.291 [2024-12-09 05:26:40.098993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:35:53.291 [2024-12-09 05:26:40.101530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:53.291 [2024-12-09 05:26:40.101910] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:35:53.291 [2024-12-09 05:26:40.102119] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:35:53.291 [2024-12-09 05:26:40.102528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:35:53.291 [2024-12-09 05:26:40.102926] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:35:53.291 [2024-12-09 05:26:40.103048] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:35:53.291 [2024-12-09 05:26:40.103512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:53.291 "name": "raid_bdev1", 00:35:53.291 "uuid": "b2d10b61-4bc8-40f6-8f0a-b2959cdbe6ef", 00:35:53.291 "strip_size_kb": 0, 00:35:53.291 "state": "online", 00:35:53.291 "raid_level": "raid1", 00:35:53.291 "superblock": true, 00:35:53.291 "num_base_bdevs": 2, 00:35:53.291 "num_base_bdevs_discovered": 2, 00:35:53.291 "num_base_bdevs_operational": 2, 00:35:53.291 "base_bdevs_list": [ 00:35:53.291 { 00:35:53.291 "name": "BaseBdev1", 00:35:53.291 "uuid": "4b591076-6783-59ad-89d0-6d4467690c02", 00:35:53.291 "is_configured": true, 00:35:53.291 "data_offset": 2048, 00:35:53.291 "data_size": 63488 00:35:53.291 }, 00:35:53.291 { 00:35:53.291 "name": "BaseBdev2", 00:35:53.291 "uuid": "604fb3c3-382d-5b28-9be4-1022dd22733c", 00:35:53.291 "is_configured": true, 00:35:53.291 "data_offset": 2048, 00:35:53.291 "data_size": 63488 00:35:53.291 } 00:35:53.291 ] 00:35:53.291 }' 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:53.291 05:26:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:53.891 05:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:35:53.891 05:26:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:35:53.891 [2024-12-09 05:26:40.773011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:35:54.824 05:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:35:54.824 05:26:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.824 05:26:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:54.824 [2024-12-09 05:26:41.650175] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:35:54.824 [2024-12-09 05:26:41.650505] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:54.824 [2024-12-09 05:26:41.650795] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:35:54.824 05:26:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.824 05:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:35:54.824 05:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:35:54.824 05:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:35:54.824 05:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:35:54.824 05:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:54.824 05:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:54.824 05:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:54.824 05:26:41 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:54.824 05:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:54.824 05:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:35:54.824 05:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:54.824 05:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:54.824 05:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:54.824 05:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:54.824 05:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:54.824 05:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:54.824 05:26:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.824 05:26:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:54.824 05:26:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.824 05:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:54.824 "name": "raid_bdev1", 00:35:54.824 "uuid": "b2d10b61-4bc8-40f6-8f0a-b2959cdbe6ef", 00:35:54.824 "strip_size_kb": 0, 00:35:54.824 "state": "online", 00:35:54.824 "raid_level": "raid1", 00:35:54.824 "superblock": true, 00:35:54.824 "num_base_bdevs": 2, 00:35:54.824 "num_base_bdevs_discovered": 1, 00:35:54.824 "num_base_bdevs_operational": 1, 00:35:54.824 "base_bdevs_list": [ 00:35:54.824 { 00:35:54.824 "name": null, 00:35:54.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:54.824 "is_configured": false, 00:35:54.824 "data_offset": 0, 00:35:54.824 "data_size": 63488 00:35:54.824 }, 00:35:54.824 { 00:35:54.824 "name": 
"BaseBdev2", 00:35:54.824 "uuid": "604fb3c3-382d-5b28-9be4-1022dd22733c", 00:35:54.824 "is_configured": true, 00:35:54.824 "data_offset": 2048, 00:35:54.824 "data_size": 63488 00:35:54.824 } 00:35:54.824 ] 00:35:54.824 }' 00:35:54.824 05:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:54.824 05:26:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:55.391 05:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:35:55.391 05:26:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.391 05:26:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:55.391 [2024-12-09 05:26:42.177709] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:55.391 [2024-12-09 05:26:42.177745] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:55.391 [2024-12-09 05:26:42.181257] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:55.391 [2024-12-09 05:26:42.181304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:55.391 [2024-12-09 05:26:42.181379] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:55.391 [2024-12-09 05:26:42.181397] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:35:55.391 { 00:35:55.391 "results": [ 00:35:55.391 { 00:35:55.391 "job": "raid_bdev1", 00:35:55.391 "core_mask": "0x1", 00:35:55.391 "workload": "randrw", 00:35:55.391 "percentage": 50, 00:35:55.391 "status": "finished", 00:35:55.391 "queue_depth": 1, 00:35:55.391 "io_size": 131072, 00:35:55.391 "runtime": 1.402283, 00:35:55.391 "iops": 14372.990330767756, 00:35:55.391 "mibps": 1796.6237913459695, 00:35:55.391 "io_failed": 0, 00:35:55.391 "io_timeout": 0, 
00:35:55.391 "avg_latency_us": 65.28968782842065, 00:35:55.391 "min_latency_us": 37.46909090909091, 00:35:55.391 "max_latency_us": 1824.581818181818 00:35:55.391 } 00:35:55.391 ], 00:35:55.391 "core_count": 1 00:35:55.391 } 00:35:55.391 05:26:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.391 05:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63709 00:35:55.391 05:26:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63709 ']' 00:35:55.391 05:26:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63709 00:35:55.391 05:26:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:35:55.391 05:26:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:55.391 05:26:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63709 00:35:55.391 05:26:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:55.391 05:26:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:55.391 05:26:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63709' 00:35:55.391 killing process with pid 63709 00:35:55.391 05:26:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63709 00:35:55.391 [2024-12-09 05:26:42.215658] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:55.391 05:26:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63709 00:35:55.391 [2024-12-09 05:26:42.335144] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:56.769 05:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.TS29bMnySF 00:35:56.769 05:26:43 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:35:56.769 05:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:35:56.769 05:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:35:56.769 05:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:35:56.769 ************************************ 00:35:56.769 END TEST raid_write_error_test 00:35:56.769 ************************************ 00:35:56.769 05:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:35:56.769 05:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:35:56.769 05:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:35:56.769 00:35:56.769 real 0m4.707s 00:35:56.769 user 0m5.747s 00:35:56.769 sys 0m0.663s 00:35:56.769 05:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:56.769 05:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:56.769 05:26:43 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:35:56.769 05:26:43 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:35:56.769 05:26:43 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:35:56.769 05:26:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:35:56.769 05:26:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:56.769 05:26:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:56.769 ************************************ 00:35:56.769 START TEST raid_state_function_test 00:35:56.769 ************************************ 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:35:56.769 
05:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:35:56.769 Process raid pid: 63853 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63853 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63853' 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63853 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63853 ']' 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:56.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:56.769 05:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.029 [2024-12-09 05:26:43.774819] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:35:57.029 [2024-12-09 05:26:43.775275] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:57.029 [2024-12-09 05:26:43.961959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:57.287 [2024-12-09 05:26:44.146477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:57.545 [2024-12-09 05:26:44.380549] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:57.545 [2024-12-09 05:26:44.380595] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:57.873 05:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:57.873 05:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:35:57.873 05:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:35:57.873 05:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.873 05:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.873 [2024-12-09 05:26:44.706019] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:57.873 [2024-12-09 05:26:44.706095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:57.873 [2024-12-09 05:26:44.706114] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:57.873 [2024-12-09 05:26:44.706130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:57.873 [2024-12-09 05:26:44.706140] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:57.873 [2024-12-09 05:26:44.706157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:57.873 05:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.873 05:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:35:57.873 05:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:57.873 05:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:57.873 05:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:57.873 05:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:57.873 05:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:57.873 05:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:57.873 05:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:57.873 05:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:57.873 05:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:57.873 05:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:57.873 05:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:35:57.873 05:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.873 05:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.873 05:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.873 05:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:57.873 "name": "Existed_Raid", 00:35:57.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:57.873 "strip_size_kb": 64, 00:35:57.873 "state": "configuring", 00:35:57.873 "raid_level": "raid0", 00:35:57.873 "superblock": false, 00:35:57.873 "num_base_bdevs": 3, 00:35:57.873 "num_base_bdevs_discovered": 0, 00:35:57.873 "num_base_bdevs_operational": 3, 00:35:57.873 "base_bdevs_list": [ 00:35:57.873 { 00:35:57.873 "name": "BaseBdev1", 00:35:57.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:57.873 "is_configured": false, 00:35:57.873 "data_offset": 0, 00:35:57.873 "data_size": 0 00:35:57.873 }, 00:35:57.873 { 00:35:57.873 "name": "BaseBdev2", 00:35:57.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:57.873 "is_configured": false, 00:35:57.873 "data_offset": 0, 00:35:57.873 "data_size": 0 00:35:57.873 }, 00:35:57.873 { 00:35:57.873 "name": "BaseBdev3", 00:35:57.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:57.873 "is_configured": false, 00:35:57.873 "data_offset": 0, 00:35:57.873 "data_size": 0 00:35:57.873 } 00:35:57.873 ] 00:35:57.873 }' 00:35:57.873 05:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:57.873 05:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.440 05:26:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.440 [2024-12-09 05:26:45.234200] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:58.440 [2024-12-09 05:26:45.234446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.440 [2024-12-09 05:26:45.246310] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:58.440 [2024-12-09 05:26:45.246417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:58.440 [2024-12-09 05:26:45.246472] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:58.440 [2024-12-09 05:26:45.246521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:58.440 [2024-12-09 05:26:45.246535] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:58.440 [2024-12-09 05:26:45.246566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.440 [2024-12-09 05:26:45.296683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:58.440 BaseBdev1 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.440 [ 00:35:58.440 { 00:35:58.440 "name": "BaseBdev1", 00:35:58.440 "aliases": [ 00:35:58.440 "3521359e-b9f0-4ef3-a432-c027116734af" 00:35:58.440 ], 00:35:58.440 
"product_name": "Malloc disk", 00:35:58.440 "block_size": 512, 00:35:58.440 "num_blocks": 65536, 00:35:58.440 "uuid": "3521359e-b9f0-4ef3-a432-c027116734af", 00:35:58.440 "assigned_rate_limits": { 00:35:58.440 "rw_ios_per_sec": 0, 00:35:58.440 "rw_mbytes_per_sec": 0, 00:35:58.440 "r_mbytes_per_sec": 0, 00:35:58.440 "w_mbytes_per_sec": 0 00:35:58.440 }, 00:35:58.440 "claimed": true, 00:35:58.440 "claim_type": "exclusive_write", 00:35:58.440 "zoned": false, 00:35:58.440 "supported_io_types": { 00:35:58.440 "read": true, 00:35:58.440 "write": true, 00:35:58.440 "unmap": true, 00:35:58.440 "flush": true, 00:35:58.440 "reset": true, 00:35:58.440 "nvme_admin": false, 00:35:58.440 "nvme_io": false, 00:35:58.440 "nvme_io_md": false, 00:35:58.440 "write_zeroes": true, 00:35:58.440 "zcopy": true, 00:35:58.440 "get_zone_info": false, 00:35:58.440 "zone_management": false, 00:35:58.440 "zone_append": false, 00:35:58.440 "compare": false, 00:35:58.440 "compare_and_write": false, 00:35:58.440 "abort": true, 00:35:58.440 "seek_hole": false, 00:35:58.440 "seek_data": false, 00:35:58.440 "copy": true, 00:35:58.440 "nvme_iov_md": false 00:35:58.440 }, 00:35:58.440 "memory_domains": [ 00:35:58.440 { 00:35:58.440 "dma_device_id": "system", 00:35:58.440 "dma_device_type": 1 00:35:58.440 }, 00:35:58.440 { 00:35:58.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:58.440 "dma_device_type": 2 00:35:58.440 } 00:35:58.440 ], 00:35:58.440 "driver_specific": {} 00:35:58.440 } 00:35:58.440 ] 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:58.440 05:26:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.440 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:58.440 "name": "Existed_Raid", 00:35:58.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:58.440 "strip_size_kb": 64, 00:35:58.440 "state": "configuring", 00:35:58.440 "raid_level": "raid0", 00:35:58.440 "superblock": false, 00:35:58.440 "num_base_bdevs": 3, 00:35:58.440 "num_base_bdevs_discovered": 1, 00:35:58.440 "num_base_bdevs_operational": 3, 00:35:58.440 "base_bdevs_list": [ 00:35:58.440 { 00:35:58.440 "name": "BaseBdev1", 
00:35:58.441 "uuid": "3521359e-b9f0-4ef3-a432-c027116734af", 00:35:58.441 "is_configured": true, 00:35:58.441 "data_offset": 0, 00:35:58.441 "data_size": 65536 00:35:58.441 }, 00:35:58.441 { 00:35:58.441 "name": "BaseBdev2", 00:35:58.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:58.441 "is_configured": false, 00:35:58.441 "data_offset": 0, 00:35:58.441 "data_size": 0 00:35:58.441 }, 00:35:58.441 { 00:35:58.441 "name": "BaseBdev3", 00:35:58.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:58.441 "is_configured": false, 00:35:58.441 "data_offset": 0, 00:35:58.441 "data_size": 0 00:35:58.441 } 00:35:58.441 ] 00:35:58.441 }' 00:35:58.441 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:58.441 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.007 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:59.007 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.007 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.007 [2024-12-09 05:26:45.804886] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:59.007 [2024-12-09 05:26:45.804940] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:35:59.007 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.007 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:35:59.007 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.007 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.007 [2024-12-09 
05:26:45.812968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:59.007 [2024-12-09 05:26:45.815513] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:59.007 [2024-12-09 05:26:45.815578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:59.007 [2024-12-09 05:26:45.815594] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:59.007 [2024-12-09 05:26:45.815609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:59.007 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.007 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:35:59.007 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:59.007 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:35:59.007 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:59.007 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:59.007 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:59.007 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:59.007 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:59.007 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:59.007 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:59.007 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:35:59.007 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:59.007 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:59.007 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:59.007 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.007 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.007 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.007 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:59.007 "name": "Existed_Raid", 00:35:59.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:59.008 "strip_size_kb": 64, 00:35:59.008 "state": "configuring", 00:35:59.008 "raid_level": "raid0", 00:35:59.008 "superblock": false, 00:35:59.008 "num_base_bdevs": 3, 00:35:59.008 "num_base_bdevs_discovered": 1, 00:35:59.008 "num_base_bdevs_operational": 3, 00:35:59.008 "base_bdevs_list": [ 00:35:59.008 { 00:35:59.008 "name": "BaseBdev1", 00:35:59.008 "uuid": "3521359e-b9f0-4ef3-a432-c027116734af", 00:35:59.008 "is_configured": true, 00:35:59.008 "data_offset": 0, 00:35:59.008 "data_size": 65536 00:35:59.008 }, 00:35:59.008 { 00:35:59.008 "name": "BaseBdev2", 00:35:59.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:59.008 "is_configured": false, 00:35:59.008 "data_offset": 0, 00:35:59.008 "data_size": 0 00:35:59.008 }, 00:35:59.008 { 00:35:59.008 "name": "BaseBdev3", 00:35:59.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:59.008 "is_configured": false, 00:35:59.008 "data_offset": 0, 00:35:59.008 "data_size": 0 00:35:59.008 } 00:35:59.008 ] 00:35:59.008 }' 00:35:59.008 05:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:35:59.008 05:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.574 [2024-12-09 05:26:46.361553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:59.574 BaseBdev2 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:59.574 05:26:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.574 [ 00:35:59.574 { 00:35:59.574 "name": "BaseBdev2", 00:35:59.574 "aliases": [ 00:35:59.574 "b4258ab3-a9c8-455a-bcf5-9204c3425a1b" 00:35:59.574 ], 00:35:59.574 "product_name": "Malloc disk", 00:35:59.574 "block_size": 512, 00:35:59.574 "num_blocks": 65536, 00:35:59.574 "uuid": "b4258ab3-a9c8-455a-bcf5-9204c3425a1b", 00:35:59.574 "assigned_rate_limits": { 00:35:59.574 "rw_ios_per_sec": 0, 00:35:59.574 "rw_mbytes_per_sec": 0, 00:35:59.574 "r_mbytes_per_sec": 0, 00:35:59.574 "w_mbytes_per_sec": 0 00:35:59.574 }, 00:35:59.574 "claimed": true, 00:35:59.574 "claim_type": "exclusive_write", 00:35:59.574 "zoned": false, 00:35:59.574 "supported_io_types": { 00:35:59.574 "read": true, 00:35:59.574 "write": true, 00:35:59.574 "unmap": true, 00:35:59.574 "flush": true, 00:35:59.574 "reset": true, 00:35:59.574 "nvme_admin": false, 00:35:59.574 "nvme_io": false, 00:35:59.574 "nvme_io_md": false, 00:35:59.574 "write_zeroes": true, 00:35:59.574 "zcopy": true, 00:35:59.574 "get_zone_info": false, 00:35:59.574 "zone_management": false, 00:35:59.574 "zone_append": false, 00:35:59.574 "compare": false, 00:35:59.574 "compare_and_write": false, 00:35:59.574 "abort": true, 00:35:59.574 "seek_hole": false, 00:35:59.574 "seek_data": false, 00:35:59.574 "copy": true, 00:35:59.574 "nvme_iov_md": false 00:35:59.574 }, 00:35:59.574 "memory_domains": [ 00:35:59.574 { 00:35:59.574 "dma_device_id": "system", 00:35:59.574 "dma_device_type": 1 00:35:59.574 }, 00:35:59.574 { 00:35:59.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:59.574 "dma_device_type": 2 00:35:59.574 } 00:35:59.574 ], 00:35:59.574 "driver_specific": {} 00:35:59.574 } 00:35:59.574 ] 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.574 05:26:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:59.574 "name": "Existed_Raid", 00:35:59.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:59.574 "strip_size_kb": 64, 00:35:59.574 "state": "configuring", 00:35:59.574 "raid_level": "raid0", 00:35:59.574 "superblock": false, 00:35:59.574 "num_base_bdevs": 3, 00:35:59.574 "num_base_bdevs_discovered": 2, 00:35:59.574 "num_base_bdevs_operational": 3, 00:35:59.574 "base_bdevs_list": [ 00:35:59.574 { 00:35:59.574 "name": "BaseBdev1", 00:35:59.574 "uuid": "3521359e-b9f0-4ef3-a432-c027116734af", 00:35:59.574 "is_configured": true, 00:35:59.574 "data_offset": 0, 00:35:59.574 "data_size": 65536 00:35:59.574 }, 00:35:59.574 { 00:35:59.574 "name": "BaseBdev2", 00:35:59.574 "uuid": "b4258ab3-a9c8-455a-bcf5-9204c3425a1b", 00:35:59.574 "is_configured": true, 00:35:59.574 "data_offset": 0, 00:35:59.574 "data_size": 65536 00:35:59.574 }, 00:35:59.574 { 00:35:59.574 "name": "BaseBdev3", 00:35:59.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:59.574 "is_configured": false, 00:35:59.574 "data_offset": 0, 00:35:59.574 "data_size": 0 00:35:59.574 } 00:35:59.574 ] 00:35:59.574 }' 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:59.574 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:00.141 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:36:00.141 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.141 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:00.141 [2024-12-09 05:26:46.961514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:00.141 [2024-12-09 05:26:46.961571] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:36:00.141 [2024-12-09 05:26:46.961594] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:36:00.142 [2024-12-09 05:26:46.961992] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:36:00.142 [2024-12-09 05:26:46.962304] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:36:00.142 [2024-12-09 05:26:46.962330] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:36:00.142 [2024-12-09 05:26:46.962696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:00.142 BaseBdev3 00:36:00.142 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.142 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:36:00.142 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:36:00.142 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:00.142 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:00.142 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:00.142 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:00.142 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:00.142 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.142 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:00.142 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.142 
05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:36:00.142 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.142 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:00.142 [ 00:36:00.142 { 00:36:00.142 "name": "BaseBdev3", 00:36:00.142 "aliases": [ 00:36:00.142 "d7459528-6776-4e61-a525-c34cbf67ff6e" 00:36:00.142 ], 00:36:00.142 "product_name": "Malloc disk", 00:36:00.142 "block_size": 512, 00:36:00.142 "num_blocks": 65536, 00:36:00.142 "uuid": "d7459528-6776-4e61-a525-c34cbf67ff6e", 00:36:00.142 "assigned_rate_limits": { 00:36:00.142 "rw_ios_per_sec": 0, 00:36:00.142 "rw_mbytes_per_sec": 0, 00:36:00.142 "r_mbytes_per_sec": 0, 00:36:00.142 "w_mbytes_per_sec": 0 00:36:00.142 }, 00:36:00.142 "claimed": true, 00:36:00.142 "claim_type": "exclusive_write", 00:36:00.142 "zoned": false, 00:36:00.142 "supported_io_types": { 00:36:00.142 "read": true, 00:36:00.142 "write": true, 00:36:00.142 "unmap": true, 00:36:00.142 "flush": true, 00:36:00.142 "reset": true, 00:36:00.142 "nvme_admin": false, 00:36:00.142 "nvme_io": false, 00:36:00.142 "nvme_io_md": false, 00:36:00.142 "write_zeroes": true, 00:36:00.142 "zcopy": true, 00:36:00.142 "get_zone_info": false, 00:36:00.142 "zone_management": false, 00:36:00.142 "zone_append": false, 00:36:00.142 "compare": false, 00:36:00.142 "compare_and_write": false, 00:36:00.142 "abort": true, 00:36:00.142 "seek_hole": false, 00:36:00.142 "seek_data": false, 00:36:00.142 "copy": true, 00:36:00.142 "nvme_iov_md": false 00:36:00.142 }, 00:36:00.142 "memory_domains": [ 00:36:00.142 { 00:36:00.142 "dma_device_id": "system", 00:36:00.142 "dma_device_type": 1 00:36:00.142 }, 00:36:00.142 { 00:36:00.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:00.142 "dma_device_type": 2 00:36:00.142 } 00:36:00.142 ], 00:36:00.142 "driver_specific": {} 00:36:00.142 } 00:36:00.142 ] 
00:36:00.142 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.142 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:00.142 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:00.142 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:00.142 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:36:00.142 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:00.142 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:00.142 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:00.142 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:00.142 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:00.142 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:00.142 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:00.142 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:00.142 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:00.142 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:00.142 05:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:00.142 05:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.142 05:26:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:36:00.142 05:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.142 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:00.142 "name": "Existed_Raid", 00:36:00.142 "uuid": "1d73f530-59e1-4370-a6fe-94150ce056db", 00:36:00.142 "strip_size_kb": 64, 00:36:00.142 "state": "online", 00:36:00.142 "raid_level": "raid0", 00:36:00.142 "superblock": false, 00:36:00.142 "num_base_bdevs": 3, 00:36:00.142 "num_base_bdevs_discovered": 3, 00:36:00.142 "num_base_bdevs_operational": 3, 00:36:00.142 "base_bdevs_list": [ 00:36:00.142 { 00:36:00.142 "name": "BaseBdev1", 00:36:00.142 "uuid": "3521359e-b9f0-4ef3-a432-c027116734af", 00:36:00.142 "is_configured": true, 00:36:00.142 "data_offset": 0, 00:36:00.142 "data_size": 65536 00:36:00.142 }, 00:36:00.142 { 00:36:00.142 "name": "BaseBdev2", 00:36:00.142 "uuid": "b4258ab3-a9c8-455a-bcf5-9204c3425a1b", 00:36:00.142 "is_configured": true, 00:36:00.142 "data_offset": 0, 00:36:00.142 "data_size": 65536 00:36:00.142 }, 00:36:00.142 { 00:36:00.142 "name": "BaseBdev3", 00:36:00.142 "uuid": "d7459528-6776-4e61-a525-c34cbf67ff6e", 00:36:00.142 "is_configured": true, 00:36:00.142 "data_offset": 0, 00:36:00.142 "data_size": 65536 00:36:00.142 } 00:36:00.142 ] 00:36:00.142 }' 00:36:00.142 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:00.142 05:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:00.710 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:36:00.710 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:36:00.710 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:00.710 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:36:00.710 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:36:00.710 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:00.710 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:36:00.710 05:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.710 05:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:00.710 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:00.710 [2024-12-09 05:26:47.506251] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:00.710 05:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.710 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:00.710 "name": "Existed_Raid", 00:36:00.710 "aliases": [ 00:36:00.710 "1d73f530-59e1-4370-a6fe-94150ce056db" 00:36:00.710 ], 00:36:00.710 "product_name": "Raid Volume", 00:36:00.710 "block_size": 512, 00:36:00.710 "num_blocks": 196608, 00:36:00.710 "uuid": "1d73f530-59e1-4370-a6fe-94150ce056db", 00:36:00.710 "assigned_rate_limits": { 00:36:00.710 "rw_ios_per_sec": 0, 00:36:00.710 "rw_mbytes_per_sec": 0, 00:36:00.710 "r_mbytes_per_sec": 0, 00:36:00.710 "w_mbytes_per_sec": 0 00:36:00.710 }, 00:36:00.710 "claimed": false, 00:36:00.710 "zoned": false, 00:36:00.710 "supported_io_types": { 00:36:00.710 "read": true, 00:36:00.710 "write": true, 00:36:00.710 "unmap": true, 00:36:00.710 "flush": true, 00:36:00.710 "reset": true, 00:36:00.710 "nvme_admin": false, 00:36:00.710 "nvme_io": false, 00:36:00.710 "nvme_io_md": false, 00:36:00.710 "write_zeroes": true, 00:36:00.710 "zcopy": false, 00:36:00.710 "get_zone_info": false, 00:36:00.710 "zone_management": false, 00:36:00.710 
"zone_append": false, 00:36:00.710 "compare": false, 00:36:00.710 "compare_and_write": false, 00:36:00.710 "abort": false, 00:36:00.710 "seek_hole": false, 00:36:00.710 "seek_data": false, 00:36:00.710 "copy": false, 00:36:00.710 "nvme_iov_md": false 00:36:00.710 }, 00:36:00.710 "memory_domains": [ 00:36:00.710 { 00:36:00.710 "dma_device_id": "system", 00:36:00.710 "dma_device_type": 1 00:36:00.710 }, 00:36:00.710 { 00:36:00.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:00.710 "dma_device_type": 2 00:36:00.710 }, 00:36:00.710 { 00:36:00.710 "dma_device_id": "system", 00:36:00.710 "dma_device_type": 1 00:36:00.710 }, 00:36:00.710 { 00:36:00.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:00.710 "dma_device_type": 2 00:36:00.710 }, 00:36:00.710 { 00:36:00.710 "dma_device_id": "system", 00:36:00.710 "dma_device_type": 1 00:36:00.710 }, 00:36:00.710 { 00:36:00.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:00.710 "dma_device_type": 2 00:36:00.710 } 00:36:00.710 ], 00:36:00.710 "driver_specific": { 00:36:00.710 "raid": { 00:36:00.710 "uuid": "1d73f530-59e1-4370-a6fe-94150ce056db", 00:36:00.710 "strip_size_kb": 64, 00:36:00.710 "state": "online", 00:36:00.710 "raid_level": "raid0", 00:36:00.710 "superblock": false, 00:36:00.710 "num_base_bdevs": 3, 00:36:00.710 "num_base_bdevs_discovered": 3, 00:36:00.710 "num_base_bdevs_operational": 3, 00:36:00.710 "base_bdevs_list": [ 00:36:00.710 { 00:36:00.710 "name": "BaseBdev1", 00:36:00.710 "uuid": "3521359e-b9f0-4ef3-a432-c027116734af", 00:36:00.710 "is_configured": true, 00:36:00.710 "data_offset": 0, 00:36:00.710 "data_size": 65536 00:36:00.710 }, 00:36:00.710 { 00:36:00.710 "name": "BaseBdev2", 00:36:00.710 "uuid": "b4258ab3-a9c8-455a-bcf5-9204c3425a1b", 00:36:00.710 "is_configured": true, 00:36:00.710 "data_offset": 0, 00:36:00.710 "data_size": 65536 00:36:00.710 }, 00:36:00.710 { 00:36:00.710 "name": "BaseBdev3", 00:36:00.710 "uuid": "d7459528-6776-4e61-a525-c34cbf67ff6e", 00:36:00.710 "is_configured": true, 
00:36:00.710 "data_offset": 0, 00:36:00.710 "data_size": 65536 00:36:00.710 } 00:36:00.710 ] 00:36:00.710 } 00:36:00.710 } 00:36:00.710 }' 00:36:00.710 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:00.710 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:36:00.710 BaseBdev2 00:36:00.710 BaseBdev3' 00:36:00.710 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:00.710 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:00.710 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:00.710 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:36:00.710 05:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.710 05:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:00.710 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:00.710 05:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:00.969 [2024-12-09 05:26:47.833883] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:00.969 [2024-12-09 05:26:47.834110] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:00.969 [2024-12-09 05:26:47.834210] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:00.969 05:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.227 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:01.227 "name": "Existed_Raid", 00:36:01.227 "uuid": "1d73f530-59e1-4370-a6fe-94150ce056db", 00:36:01.227 "strip_size_kb": 64, 00:36:01.227 "state": "offline", 00:36:01.227 "raid_level": "raid0", 00:36:01.227 "superblock": false, 00:36:01.227 "num_base_bdevs": 3, 00:36:01.227 "num_base_bdevs_discovered": 2, 00:36:01.227 "num_base_bdevs_operational": 2, 00:36:01.227 "base_bdevs_list": [ 00:36:01.227 { 00:36:01.227 "name": null, 00:36:01.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:01.227 "is_configured": false, 00:36:01.227 "data_offset": 0, 00:36:01.227 "data_size": 65536 00:36:01.227 }, 00:36:01.227 { 00:36:01.227 "name": "BaseBdev2", 00:36:01.227 "uuid": "b4258ab3-a9c8-455a-bcf5-9204c3425a1b", 00:36:01.227 "is_configured": true, 00:36:01.227 "data_offset": 0, 00:36:01.227 "data_size": 65536 00:36:01.227 }, 00:36:01.227 { 00:36:01.227 "name": "BaseBdev3", 00:36:01.227 "uuid": "d7459528-6776-4e61-a525-c34cbf67ff6e", 00:36:01.227 "is_configured": true, 00:36:01.227 "data_offset": 0, 00:36:01.227 "data_size": 65536 00:36:01.227 } 00:36:01.227 ] 00:36:01.227 }' 00:36:01.227 05:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:01.227 05:26:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:01.485 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:36:01.485 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:01.485 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:01.485 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.485 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:01.485 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:36:01.485 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.743 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:36:01.743 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:01.743 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:36:01.743 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.743 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:01.743 [2024-12-09 05:26:48.487617] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:01.743 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.743 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:36:01.743 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:01.743 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:01.743 05:26:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:36:01.743 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.743 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:01.743 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.743 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:36:01.743 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:01.743 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:36:01.743 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.743 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:01.744 [2024-12-09 05:26:48.627316] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:36:01.744 [2024-12-09 05:26:48.627417] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.003 BaseBdev2 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.003 [ 00:36:02.003 { 00:36:02.003 "name": "BaseBdev2", 00:36:02.003 "aliases": [ 00:36:02.003 "0f1c3b61-a02b-481f-84c2-69a7c621d2d3" 00:36:02.003 ], 00:36:02.003 "product_name": "Malloc disk", 00:36:02.003 "block_size": 512, 00:36:02.003 "num_blocks": 65536, 00:36:02.003 "uuid": "0f1c3b61-a02b-481f-84c2-69a7c621d2d3", 00:36:02.003 "assigned_rate_limits": { 00:36:02.003 "rw_ios_per_sec": 0, 00:36:02.003 "rw_mbytes_per_sec": 0, 00:36:02.003 "r_mbytes_per_sec": 0, 00:36:02.003 "w_mbytes_per_sec": 0 00:36:02.003 }, 00:36:02.003 "claimed": false, 00:36:02.003 "zoned": false, 00:36:02.003 "supported_io_types": { 00:36:02.003 "read": true, 00:36:02.003 "write": true, 00:36:02.003 "unmap": true, 00:36:02.003 "flush": true, 00:36:02.003 "reset": true, 00:36:02.003 "nvme_admin": false, 00:36:02.003 "nvme_io": false, 00:36:02.003 "nvme_io_md": false, 00:36:02.003 "write_zeroes": true, 00:36:02.003 "zcopy": true, 00:36:02.003 "get_zone_info": false, 00:36:02.003 "zone_management": false, 00:36:02.003 "zone_append": false, 00:36:02.003 "compare": false, 00:36:02.003 "compare_and_write": false, 00:36:02.003 "abort": true, 00:36:02.003 "seek_hole": false, 00:36:02.003 "seek_data": false, 00:36:02.003 "copy": true, 00:36:02.003 "nvme_iov_md": false 00:36:02.003 }, 00:36:02.003 "memory_domains": [ 00:36:02.003 { 00:36:02.003 "dma_device_id": "system", 00:36:02.003 "dma_device_type": 1 00:36:02.003 }, 
00:36:02.003 { 00:36:02.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:02.003 "dma_device_type": 2 00:36:02.003 } 00:36:02.003 ], 00:36:02.003 "driver_specific": {} 00:36:02.003 } 00:36:02.003 ] 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.003 BaseBdev3 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.003 [ 00:36:02.003 { 00:36:02.003 "name": "BaseBdev3", 00:36:02.003 "aliases": [ 00:36:02.003 "0b879184-dcaa-4963-9eb3-7c6458edb711" 00:36:02.003 ], 00:36:02.003 "product_name": "Malloc disk", 00:36:02.003 "block_size": 512, 00:36:02.003 "num_blocks": 65536, 00:36:02.003 "uuid": "0b879184-dcaa-4963-9eb3-7c6458edb711", 00:36:02.003 "assigned_rate_limits": { 00:36:02.003 "rw_ios_per_sec": 0, 00:36:02.003 "rw_mbytes_per_sec": 0, 00:36:02.003 "r_mbytes_per_sec": 0, 00:36:02.003 "w_mbytes_per_sec": 0 00:36:02.003 }, 00:36:02.003 "claimed": false, 00:36:02.003 "zoned": false, 00:36:02.003 "supported_io_types": { 00:36:02.003 "read": true, 00:36:02.003 "write": true, 00:36:02.003 "unmap": true, 00:36:02.003 "flush": true, 00:36:02.003 "reset": true, 00:36:02.003 "nvme_admin": false, 00:36:02.003 "nvme_io": false, 00:36:02.003 "nvme_io_md": false, 00:36:02.003 "write_zeroes": true, 00:36:02.003 "zcopy": true, 00:36:02.003 "get_zone_info": false, 00:36:02.003 "zone_management": false, 00:36:02.003 "zone_append": false, 00:36:02.003 "compare": false, 00:36:02.003 "compare_and_write": false, 00:36:02.003 "abort": true, 00:36:02.003 "seek_hole": false, 00:36:02.003 "seek_data": false, 00:36:02.003 "copy": true, 00:36:02.003 "nvme_iov_md": false 00:36:02.003 }, 00:36:02.003 "memory_domains": [ 00:36:02.003 { 00:36:02.003 "dma_device_id": "system", 00:36:02.003 "dma_device_type": 1 00:36:02.003 }, 00:36:02.003 { 
00:36:02.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:02.003 "dma_device_type": 2 00:36:02.003 } 00:36:02.003 ], 00:36:02.003 "driver_specific": {} 00:36:02.003 } 00:36:02.003 ] 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.003 [2024-12-09 05:26:48.933598] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:02.003 [2024-12-09 05:26:48.933657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:02.003 [2024-12-09 05:26:48.933690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:02.003 [2024-12-09 05:26:48.936108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.003 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.262 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:02.262 "name": "Existed_Raid", 00:36:02.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:02.262 "strip_size_kb": 64, 00:36:02.262 "state": "configuring", 00:36:02.262 "raid_level": "raid0", 00:36:02.262 "superblock": false, 00:36:02.262 "num_base_bdevs": 3, 00:36:02.262 "num_base_bdevs_discovered": 2, 00:36:02.262 "num_base_bdevs_operational": 3, 00:36:02.262 "base_bdevs_list": [ 00:36:02.262 { 00:36:02.262 "name": "BaseBdev1", 00:36:02.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:02.262 
"is_configured": false, 00:36:02.262 "data_offset": 0, 00:36:02.262 "data_size": 0 00:36:02.262 }, 00:36:02.262 { 00:36:02.262 "name": "BaseBdev2", 00:36:02.262 "uuid": "0f1c3b61-a02b-481f-84c2-69a7c621d2d3", 00:36:02.262 "is_configured": true, 00:36:02.262 "data_offset": 0, 00:36:02.262 "data_size": 65536 00:36:02.262 }, 00:36:02.262 { 00:36:02.262 "name": "BaseBdev3", 00:36:02.262 "uuid": "0b879184-dcaa-4963-9eb3-7c6458edb711", 00:36:02.262 "is_configured": true, 00:36:02.262 "data_offset": 0, 00:36:02.262 "data_size": 65536 00:36:02.262 } 00:36:02.262 ] 00:36:02.262 }' 00:36:02.262 05:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:02.262 05:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.521 05:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:36:02.521 05:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.521 05:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.521 [2024-12-09 05:26:49.461826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:02.521 05:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.521 05:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:36:02.521 05:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:02.521 05:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:02.521 05:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:02.521 05:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:02.521 05:26:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:02.521 05:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:02.521 05:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:02.521 05:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:02.521 05:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:02.521 05:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:02.521 05:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:02.521 05:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.521 05:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.521 05:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.781 05:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:02.781 "name": "Existed_Raid", 00:36:02.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:02.781 "strip_size_kb": 64, 00:36:02.781 "state": "configuring", 00:36:02.781 "raid_level": "raid0", 00:36:02.781 "superblock": false, 00:36:02.781 "num_base_bdevs": 3, 00:36:02.781 "num_base_bdevs_discovered": 1, 00:36:02.781 "num_base_bdevs_operational": 3, 00:36:02.781 "base_bdevs_list": [ 00:36:02.781 { 00:36:02.781 "name": "BaseBdev1", 00:36:02.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:02.781 "is_configured": false, 00:36:02.781 "data_offset": 0, 00:36:02.781 "data_size": 0 00:36:02.781 }, 00:36:02.781 { 00:36:02.781 "name": null, 00:36:02.781 "uuid": "0f1c3b61-a02b-481f-84c2-69a7c621d2d3", 00:36:02.781 "is_configured": false, 00:36:02.781 "data_offset": 0, 
00:36:02.781 "data_size": 65536 00:36:02.781 }, 00:36:02.781 { 00:36:02.781 "name": "BaseBdev3", 00:36:02.781 "uuid": "0b879184-dcaa-4963-9eb3-7c6458edb711", 00:36:02.781 "is_configured": true, 00:36:02.781 "data_offset": 0, 00:36:02.781 "data_size": 65536 00:36:02.781 } 00:36:02.781 ] 00:36:02.781 }' 00:36:02.781 05:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:02.781 05:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:03.039 05:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:03.039 05:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.039 05:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:03.039 05:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:36:03.039 05:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.039 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:36:03.039 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:36:03.039 05:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.039 05:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:03.298 [2024-12-09 05:26:50.055605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:03.298 BaseBdev1 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:03.298 [ 00:36:03.298 { 00:36:03.298 "name": "BaseBdev1", 00:36:03.298 "aliases": [ 00:36:03.298 "f9cf5e3b-5879-4554-9182-5e708aa6c041" 00:36:03.298 ], 00:36:03.298 "product_name": "Malloc disk", 00:36:03.298 "block_size": 512, 00:36:03.298 "num_blocks": 65536, 00:36:03.298 "uuid": "f9cf5e3b-5879-4554-9182-5e708aa6c041", 00:36:03.298 "assigned_rate_limits": { 00:36:03.298 "rw_ios_per_sec": 0, 00:36:03.298 "rw_mbytes_per_sec": 0, 00:36:03.298 "r_mbytes_per_sec": 0, 00:36:03.298 "w_mbytes_per_sec": 0 00:36:03.298 }, 00:36:03.298 "claimed": true, 00:36:03.298 "claim_type": "exclusive_write", 00:36:03.298 "zoned": false, 00:36:03.298 "supported_io_types": { 00:36:03.298 "read": true, 00:36:03.298 "write": true, 00:36:03.298 "unmap": 
true, 00:36:03.298 "flush": true, 00:36:03.298 "reset": true, 00:36:03.298 "nvme_admin": false, 00:36:03.298 "nvme_io": false, 00:36:03.298 "nvme_io_md": false, 00:36:03.298 "write_zeroes": true, 00:36:03.298 "zcopy": true, 00:36:03.298 "get_zone_info": false, 00:36:03.298 "zone_management": false, 00:36:03.298 "zone_append": false, 00:36:03.298 "compare": false, 00:36:03.298 "compare_and_write": false, 00:36:03.298 "abort": true, 00:36:03.298 "seek_hole": false, 00:36:03.298 "seek_data": false, 00:36:03.298 "copy": true, 00:36:03.298 "nvme_iov_md": false 00:36:03.298 }, 00:36:03.298 "memory_domains": [ 00:36:03.298 { 00:36:03.298 "dma_device_id": "system", 00:36:03.298 "dma_device_type": 1 00:36:03.298 }, 00:36:03.298 { 00:36:03.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:03.298 "dma_device_type": 2 00:36:03.298 } 00:36:03.298 ], 00:36:03.298 "driver_specific": {} 00:36:03.298 } 00:36:03.298 ] 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:03.298 05:26:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:03.298 "name": "Existed_Raid", 00:36:03.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:03.298 "strip_size_kb": 64, 00:36:03.298 "state": "configuring", 00:36:03.298 "raid_level": "raid0", 00:36:03.298 "superblock": false, 00:36:03.298 "num_base_bdevs": 3, 00:36:03.298 "num_base_bdevs_discovered": 2, 00:36:03.298 "num_base_bdevs_operational": 3, 00:36:03.298 "base_bdevs_list": [ 00:36:03.298 { 00:36:03.298 "name": "BaseBdev1", 00:36:03.298 "uuid": "f9cf5e3b-5879-4554-9182-5e708aa6c041", 00:36:03.298 "is_configured": true, 00:36:03.298 "data_offset": 0, 00:36:03.298 "data_size": 65536 00:36:03.298 }, 00:36:03.298 { 00:36:03.298 "name": null, 00:36:03.298 "uuid": "0f1c3b61-a02b-481f-84c2-69a7c621d2d3", 00:36:03.298 "is_configured": false, 00:36:03.298 "data_offset": 0, 00:36:03.298 "data_size": 65536 00:36:03.298 }, 00:36:03.298 { 00:36:03.298 "name": "BaseBdev3", 00:36:03.298 "uuid": "0b879184-dcaa-4963-9eb3-7c6458edb711", 00:36:03.298 "is_configured": true, 00:36:03.298 "data_offset": 0, 
00:36:03.298 "data_size": 65536 00:36:03.298 } 00:36:03.298 ] 00:36:03.298 }' 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:03.298 05:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:03.865 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:03.865 05:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.865 05:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:03.865 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:36:03.865 05:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.865 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:36:03.865 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:36:03.865 05:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.865 05:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:03.865 [2024-12-09 05:26:50.667827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:36:03.865 05:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.865 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:36:03.865 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:03.865 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:03.865 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:36:03.865 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:03.865 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:03.865 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:03.865 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:03.865 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:03.865 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:03.865 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:03.865 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:03.865 05:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.865 05:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:03.865 05:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.865 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:03.865 "name": "Existed_Raid", 00:36:03.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:03.865 "strip_size_kb": 64, 00:36:03.865 "state": "configuring", 00:36:03.865 "raid_level": "raid0", 00:36:03.865 "superblock": false, 00:36:03.865 "num_base_bdevs": 3, 00:36:03.865 "num_base_bdevs_discovered": 1, 00:36:03.865 "num_base_bdevs_operational": 3, 00:36:03.865 "base_bdevs_list": [ 00:36:03.865 { 00:36:03.865 "name": "BaseBdev1", 00:36:03.865 "uuid": "f9cf5e3b-5879-4554-9182-5e708aa6c041", 00:36:03.865 "is_configured": true, 00:36:03.865 "data_offset": 0, 00:36:03.865 "data_size": 65536 00:36:03.865 }, 00:36:03.865 { 
00:36:03.865 "name": null, 00:36:03.865 "uuid": "0f1c3b61-a02b-481f-84c2-69a7c621d2d3", 00:36:03.865 "is_configured": false, 00:36:03.865 "data_offset": 0, 00:36:03.865 "data_size": 65536 00:36:03.865 }, 00:36:03.865 { 00:36:03.865 "name": null, 00:36:03.865 "uuid": "0b879184-dcaa-4963-9eb3-7c6458edb711", 00:36:03.865 "is_configured": false, 00:36:03.865 "data_offset": 0, 00:36:03.865 "data_size": 65536 00:36:03.865 } 00:36:03.865 ] 00:36:03.865 }' 00:36:03.865 05:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:03.865 05:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:04.437 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:04.437 05:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.437 05:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:04.437 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:36:04.437 05:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.437 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:36:04.437 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:36:04.437 05:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.437 05:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:04.437 [2024-12-09 05:26:51.252115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:04.437 05:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.437 05:26:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:36:04.437 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:04.437 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:04.437 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:04.437 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:04.437 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:04.437 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:04.437 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:04.437 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:04.437 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:04.437 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:04.437 05:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.437 05:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:04.437 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:04.437 05:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.437 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:04.437 "name": "Existed_Raid", 00:36:04.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:04.437 "strip_size_kb": 64, 00:36:04.437 "state": "configuring", 00:36:04.437 "raid_level": "raid0", 00:36:04.437 
"superblock": false, 00:36:04.437 "num_base_bdevs": 3, 00:36:04.437 "num_base_bdevs_discovered": 2, 00:36:04.437 "num_base_bdevs_operational": 3, 00:36:04.437 "base_bdevs_list": [ 00:36:04.437 { 00:36:04.437 "name": "BaseBdev1", 00:36:04.437 "uuid": "f9cf5e3b-5879-4554-9182-5e708aa6c041", 00:36:04.437 "is_configured": true, 00:36:04.437 "data_offset": 0, 00:36:04.437 "data_size": 65536 00:36:04.437 }, 00:36:04.437 { 00:36:04.437 "name": null, 00:36:04.437 "uuid": "0f1c3b61-a02b-481f-84c2-69a7c621d2d3", 00:36:04.437 "is_configured": false, 00:36:04.437 "data_offset": 0, 00:36:04.437 "data_size": 65536 00:36:04.437 }, 00:36:04.437 { 00:36:04.437 "name": "BaseBdev3", 00:36:04.437 "uuid": "0b879184-dcaa-4963-9eb3-7c6458edb711", 00:36:04.437 "is_configured": true, 00:36:04.437 "data_offset": 0, 00:36:04.437 "data_size": 65536 00:36:04.437 } 00:36:04.437 ] 00:36:04.437 }' 00:36:04.437 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:04.437 05:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:05.003 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:36:05.003 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:05.003 05:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.004 05:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:05.004 05:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.004 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:36:05.004 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:36:05.004 05:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:36:05.004 05:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:05.004 [2024-12-09 05:26:51.860391] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:05.004 05:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.004 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:36:05.004 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:05.004 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:05.004 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:05.004 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:05.004 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:05.004 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:05.004 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:05.004 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:05.004 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:05.004 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:05.004 05:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.004 05:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:05.004 05:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:05.262 05:26:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.262 05:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:05.262 "name": "Existed_Raid", 00:36:05.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:05.262 "strip_size_kb": 64, 00:36:05.262 "state": "configuring", 00:36:05.262 "raid_level": "raid0", 00:36:05.262 "superblock": false, 00:36:05.262 "num_base_bdevs": 3, 00:36:05.262 "num_base_bdevs_discovered": 1, 00:36:05.262 "num_base_bdevs_operational": 3, 00:36:05.262 "base_bdevs_list": [ 00:36:05.262 { 00:36:05.262 "name": null, 00:36:05.262 "uuid": "f9cf5e3b-5879-4554-9182-5e708aa6c041", 00:36:05.262 "is_configured": false, 00:36:05.262 "data_offset": 0, 00:36:05.262 "data_size": 65536 00:36:05.262 }, 00:36:05.262 { 00:36:05.262 "name": null, 00:36:05.262 "uuid": "0f1c3b61-a02b-481f-84c2-69a7c621d2d3", 00:36:05.262 "is_configured": false, 00:36:05.262 "data_offset": 0, 00:36:05.262 "data_size": 65536 00:36:05.262 }, 00:36:05.262 { 00:36:05.263 "name": "BaseBdev3", 00:36:05.263 "uuid": "0b879184-dcaa-4963-9eb3-7c6458edb711", 00:36:05.263 "is_configured": true, 00:36:05.263 "data_offset": 0, 00:36:05.263 "data_size": 65536 00:36:05.263 } 00:36:05.263 ] 00:36:05.263 }' 00:36:05.263 05:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:05.263 05:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:05.829 05:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:36:05.829 05:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:05.829 05:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.829 05:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:05.829 05:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:36:05.829 05:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:36:05.829 05:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:36:05.829 05:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.829 05:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:05.829 [2024-12-09 05:26:52.552491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:05.829 05:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.829 05:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:36:05.829 05:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:05.829 05:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:05.829 05:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:05.829 05:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:05.829 05:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:05.829 05:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:05.829 05:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:05.829 05:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:05.829 05:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:05.829 05:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:36:05.829 05:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:05.829 05:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.829 05:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:05.829 05:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.829 05:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:05.829 "name": "Existed_Raid", 00:36:05.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:05.829 "strip_size_kb": 64, 00:36:05.829 "state": "configuring", 00:36:05.829 "raid_level": "raid0", 00:36:05.829 "superblock": false, 00:36:05.829 "num_base_bdevs": 3, 00:36:05.829 "num_base_bdevs_discovered": 2, 00:36:05.829 "num_base_bdevs_operational": 3, 00:36:05.829 "base_bdevs_list": [ 00:36:05.829 { 00:36:05.829 "name": null, 00:36:05.829 "uuid": "f9cf5e3b-5879-4554-9182-5e708aa6c041", 00:36:05.829 "is_configured": false, 00:36:05.829 "data_offset": 0, 00:36:05.829 "data_size": 65536 00:36:05.829 }, 00:36:05.829 { 00:36:05.829 "name": "BaseBdev2", 00:36:05.829 "uuid": "0f1c3b61-a02b-481f-84c2-69a7c621d2d3", 00:36:05.829 "is_configured": true, 00:36:05.829 "data_offset": 0, 00:36:05.829 "data_size": 65536 00:36:05.829 }, 00:36:05.829 { 00:36:05.829 "name": "BaseBdev3", 00:36:05.829 "uuid": "0b879184-dcaa-4963-9eb3-7c6458edb711", 00:36:05.829 "is_configured": true, 00:36:05.829 "data_offset": 0, 00:36:05.829 "data_size": 65536 00:36:05.829 } 00:36:05.829 ] 00:36:05.829 }' 00:36:05.829 05:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:05.829 05:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:06.396 05:26:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f9cf5e3b-5879-4554-9182-5e708aa6c041 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:06.396 [2024-12-09 05:26:53.224011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:36:06.396 [2024-12-09 05:26:53.224331] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:36:06.396 [2024-12-09 05:26:53.224379] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:36:06.396 [2024-12-09 05:26:53.224797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:36:06.396 [2024-12-09 05:26:53.225041] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:36:06.396 [2024-12-09 05:26:53.225073] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:36:06.396 [2024-12-09 05:26:53.225402] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:06.396 NewBaseBdev 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:36:06.396 [ 00:36:06.396 { 00:36:06.396 "name": "NewBaseBdev", 00:36:06.396 "aliases": [ 00:36:06.396 "f9cf5e3b-5879-4554-9182-5e708aa6c041" 00:36:06.396 ], 00:36:06.396 "product_name": "Malloc disk", 00:36:06.396 "block_size": 512, 00:36:06.396 "num_blocks": 65536, 00:36:06.396 "uuid": "f9cf5e3b-5879-4554-9182-5e708aa6c041", 00:36:06.396 "assigned_rate_limits": { 00:36:06.396 "rw_ios_per_sec": 0, 00:36:06.396 "rw_mbytes_per_sec": 0, 00:36:06.396 "r_mbytes_per_sec": 0, 00:36:06.396 "w_mbytes_per_sec": 0 00:36:06.396 }, 00:36:06.396 "claimed": true, 00:36:06.396 "claim_type": "exclusive_write", 00:36:06.396 "zoned": false, 00:36:06.396 "supported_io_types": { 00:36:06.396 "read": true, 00:36:06.396 "write": true, 00:36:06.396 "unmap": true, 00:36:06.396 "flush": true, 00:36:06.396 "reset": true, 00:36:06.396 "nvme_admin": false, 00:36:06.396 "nvme_io": false, 00:36:06.396 "nvme_io_md": false, 00:36:06.396 "write_zeroes": true, 00:36:06.396 "zcopy": true, 00:36:06.396 "get_zone_info": false, 00:36:06.396 "zone_management": false, 00:36:06.396 "zone_append": false, 00:36:06.396 "compare": false, 00:36:06.396 "compare_and_write": false, 00:36:06.396 "abort": true, 00:36:06.396 "seek_hole": false, 00:36:06.396 "seek_data": false, 00:36:06.396 "copy": true, 00:36:06.396 "nvme_iov_md": false 00:36:06.396 }, 00:36:06.396 "memory_domains": [ 00:36:06.396 { 00:36:06.396 "dma_device_id": "system", 00:36:06.396 "dma_device_type": 1 00:36:06.396 }, 00:36:06.396 { 00:36:06.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:06.396 "dma_device_type": 2 00:36:06.396 } 00:36:06.396 ], 00:36:06.396 "driver_specific": {} 00:36:06.396 } 00:36:06.396 ] 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.396 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:06.396 "name": "Existed_Raid", 00:36:06.396 "uuid": "436c8b1f-03b4-40d4-93bd-2d8d0fe3f845", 00:36:06.396 "strip_size_kb": 64, 00:36:06.396 "state": "online", 00:36:06.396 "raid_level": "raid0", 00:36:06.396 "superblock": false, 00:36:06.396 "num_base_bdevs": 3, 00:36:06.396 
"num_base_bdevs_discovered": 3, 00:36:06.396 "num_base_bdevs_operational": 3, 00:36:06.396 "base_bdevs_list": [ 00:36:06.396 { 00:36:06.396 "name": "NewBaseBdev", 00:36:06.396 "uuid": "f9cf5e3b-5879-4554-9182-5e708aa6c041", 00:36:06.396 "is_configured": true, 00:36:06.396 "data_offset": 0, 00:36:06.396 "data_size": 65536 00:36:06.396 }, 00:36:06.396 { 00:36:06.396 "name": "BaseBdev2", 00:36:06.396 "uuid": "0f1c3b61-a02b-481f-84c2-69a7c621d2d3", 00:36:06.396 "is_configured": true, 00:36:06.396 "data_offset": 0, 00:36:06.396 "data_size": 65536 00:36:06.396 }, 00:36:06.396 { 00:36:06.396 "name": "BaseBdev3", 00:36:06.397 "uuid": "0b879184-dcaa-4963-9eb3-7c6458edb711", 00:36:06.397 "is_configured": true, 00:36:06.397 "data_offset": 0, 00:36:06.397 "data_size": 65536 00:36:06.397 } 00:36:06.397 ] 00:36:06.397 }' 00:36:06.397 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:06.397 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:06.962 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:36:06.962 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:36:06.962 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:06.962 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:06.962 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:36:06.962 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:06.962 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:06.962 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:36:06.962 05:26:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.962 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:06.962 [2024-12-09 05:26:53.804717] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:06.962 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.962 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:06.962 "name": "Existed_Raid", 00:36:06.962 "aliases": [ 00:36:06.962 "436c8b1f-03b4-40d4-93bd-2d8d0fe3f845" 00:36:06.962 ], 00:36:06.962 "product_name": "Raid Volume", 00:36:06.962 "block_size": 512, 00:36:06.962 "num_blocks": 196608, 00:36:06.962 "uuid": "436c8b1f-03b4-40d4-93bd-2d8d0fe3f845", 00:36:06.962 "assigned_rate_limits": { 00:36:06.962 "rw_ios_per_sec": 0, 00:36:06.962 "rw_mbytes_per_sec": 0, 00:36:06.962 "r_mbytes_per_sec": 0, 00:36:06.962 "w_mbytes_per_sec": 0 00:36:06.962 }, 00:36:06.962 "claimed": false, 00:36:06.962 "zoned": false, 00:36:06.962 "supported_io_types": { 00:36:06.962 "read": true, 00:36:06.962 "write": true, 00:36:06.962 "unmap": true, 00:36:06.962 "flush": true, 00:36:06.962 "reset": true, 00:36:06.962 "nvme_admin": false, 00:36:06.962 "nvme_io": false, 00:36:06.962 "nvme_io_md": false, 00:36:06.962 "write_zeroes": true, 00:36:06.962 "zcopy": false, 00:36:06.962 "get_zone_info": false, 00:36:06.962 "zone_management": false, 00:36:06.962 "zone_append": false, 00:36:06.962 "compare": false, 00:36:06.962 "compare_and_write": false, 00:36:06.962 "abort": false, 00:36:06.963 "seek_hole": false, 00:36:06.963 "seek_data": false, 00:36:06.963 "copy": false, 00:36:06.963 "nvme_iov_md": false 00:36:06.963 }, 00:36:06.963 "memory_domains": [ 00:36:06.963 { 00:36:06.963 "dma_device_id": "system", 00:36:06.963 "dma_device_type": 1 00:36:06.963 }, 00:36:06.963 { 00:36:06.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:06.963 "dma_device_type": 2 00:36:06.963 }, 
00:36:06.963 { 00:36:06.963 "dma_device_id": "system", 00:36:06.963 "dma_device_type": 1 00:36:06.963 }, 00:36:06.963 { 00:36:06.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:06.963 "dma_device_type": 2 00:36:06.963 }, 00:36:06.963 { 00:36:06.963 "dma_device_id": "system", 00:36:06.963 "dma_device_type": 1 00:36:06.963 }, 00:36:06.963 { 00:36:06.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:06.963 "dma_device_type": 2 00:36:06.963 } 00:36:06.963 ], 00:36:06.963 "driver_specific": { 00:36:06.963 "raid": { 00:36:06.963 "uuid": "436c8b1f-03b4-40d4-93bd-2d8d0fe3f845", 00:36:06.963 "strip_size_kb": 64, 00:36:06.963 "state": "online", 00:36:06.963 "raid_level": "raid0", 00:36:06.963 "superblock": false, 00:36:06.963 "num_base_bdevs": 3, 00:36:06.963 "num_base_bdevs_discovered": 3, 00:36:06.963 "num_base_bdevs_operational": 3, 00:36:06.963 "base_bdevs_list": [ 00:36:06.963 { 00:36:06.963 "name": "NewBaseBdev", 00:36:06.963 "uuid": "f9cf5e3b-5879-4554-9182-5e708aa6c041", 00:36:06.963 "is_configured": true, 00:36:06.963 "data_offset": 0, 00:36:06.963 "data_size": 65536 00:36:06.963 }, 00:36:06.963 { 00:36:06.963 "name": "BaseBdev2", 00:36:06.963 "uuid": "0f1c3b61-a02b-481f-84c2-69a7c621d2d3", 00:36:06.963 "is_configured": true, 00:36:06.963 "data_offset": 0, 00:36:06.963 "data_size": 65536 00:36:06.963 }, 00:36:06.963 { 00:36:06.963 "name": "BaseBdev3", 00:36:06.963 "uuid": "0b879184-dcaa-4963-9eb3-7c6458edb711", 00:36:06.963 "is_configured": true, 00:36:06.963 "data_offset": 0, 00:36:06.963 "data_size": 65536 00:36:06.963 } 00:36:06.963 ] 00:36:06.963 } 00:36:06.963 } 00:36:06.963 }' 00:36:06.963 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:06.963 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:36:06.963 BaseBdev2 00:36:06.963 BaseBdev3' 00:36:06.963 05:26:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:07.221 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:07.221 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:07.221 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:36:07.221 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.221 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:07.221 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:07.221 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.221 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:07.221 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:07.221 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:07.221 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:07.221 05:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:36:07.221 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.221 05:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:07.221 05:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.221 05:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:36:07.221 05:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:07.221 05:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:07.221 05:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:36:07.221 05:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:07.221 05:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.221 05:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:07.221 05:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.221 05:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:07.221 05:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:07.221 05:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:07.221 05:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.221 05:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:07.221 [2024-12-09 05:26:54.112383] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:07.221 [2024-12-09 05:26:54.112422] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:07.222 [2024-12-09 05:26:54.112546] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:07.222 [2024-12-09 05:26:54.112649] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:07.222 [2024-12-09 05:26:54.112677] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:36:07.222 05:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.222 05:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63853 00:36:07.222 05:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63853 ']' 00:36:07.222 05:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63853 00:36:07.222 05:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:36:07.222 05:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:07.222 05:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63853 00:36:07.222 killing process with pid 63853 00:36:07.222 05:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:07.222 05:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:07.222 05:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63853' 00:36:07.222 05:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63853 00:36:07.222 [2024-12-09 05:26:54.157476] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:07.222 05:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63853 00:36:07.479 [2024-12-09 05:26:54.435494] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:08.854 05:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:36:08.854 00:36:08.854 real 0m11.976s 00:36:08.854 user 0m19.510s 00:36:08.854 sys 0m1.833s 00:36:08.854 ************************************ 00:36:08.854 END TEST 
raid_state_function_test 00:36:08.854 ************************************ 00:36:08.854 05:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:08.854 05:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:08.854 05:26:55 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:36:08.854 05:26:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:36:08.854 05:26:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:08.854 05:26:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:08.854 ************************************ 00:36:08.854 START TEST raid_state_function_test_sb 00:36:08.854 ************************************ 00:36:08.854 05:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:36:08.854 05:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:36:08.854 05:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:36:08.854 05:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:36:08.854 05:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:36:08.854 05:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:36:08.854 05:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:08.854 05:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:36:08.854 05:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:08.854 05:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:08.854 05:26:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:36:08.854 05:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:08.854 05:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:08.854 05:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:36:08.854 05:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:08.854 05:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:08.854 05:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:36:08.854 05:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:36:08.855 05:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:36:08.855 05:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:36:08.855 05:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:36:08.855 05:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:36:08.855 05:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:36:08.855 05:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:36:08.855 05:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:36:08.855 05:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:36:08.855 05:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:36:08.855 Process raid pid: 64485 00:36:08.855 05:26:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@229 -- # raid_pid=64485 00:36:08.855 05:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64485' 00:36:08.855 05:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:36:08.855 05:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64485 00:36:08.855 05:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64485 ']' 00:36:08.855 05:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:08.855 05:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:08.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:08.855 05:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:08.855 05:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:08.855 05:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:09.113 [2024-12-09 05:26:55.827977] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:36:09.113 [2024-12-09 05:26:55.828162] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:09.113 [2024-12-09 05:26:56.026109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:09.371 [2024-12-09 05:26:56.196119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:09.630 [2024-12-09 05:26:56.425585] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:09.630 [2024-12-09 05:26:56.425644] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:09.887 05:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:09.887 05:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:36:09.887 05:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:36:09.887 05:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.887 05:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:09.887 [2024-12-09 05:26:56.792308] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:09.887 [2024-12-09 05:26:56.792388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:09.887 [2024-12-09 05:26:56.792405] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:09.887 [2024-12-09 05:26:56.792420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:09.887 [2024-12-09 05:26:56.792428] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:36:09.887 [2024-12-09 05:26:56.792441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:09.887 05:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.887 05:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:36:09.887 05:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:09.887 05:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:09.888 05:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:09.888 05:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:09.888 05:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:09.888 05:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:09.888 05:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:09.888 05:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:09.888 05:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:09.888 05:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:09.888 05:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.888 05:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:09.888 05:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:09.888 05:26:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.888 05:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:09.888 "name": "Existed_Raid", 00:36:09.888 "uuid": "64f82f5a-87a6-467a-9a90-7b70af024d9c", 00:36:09.888 "strip_size_kb": 64, 00:36:09.888 "state": "configuring", 00:36:09.888 "raid_level": "raid0", 00:36:09.888 "superblock": true, 00:36:09.888 "num_base_bdevs": 3, 00:36:09.888 "num_base_bdevs_discovered": 0, 00:36:09.888 "num_base_bdevs_operational": 3, 00:36:09.888 "base_bdevs_list": [ 00:36:09.888 { 00:36:09.888 "name": "BaseBdev1", 00:36:09.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:09.888 "is_configured": false, 00:36:09.888 "data_offset": 0, 00:36:09.888 "data_size": 0 00:36:09.888 }, 00:36:09.888 { 00:36:09.888 "name": "BaseBdev2", 00:36:09.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:09.888 "is_configured": false, 00:36:09.888 "data_offset": 0, 00:36:09.888 "data_size": 0 00:36:09.888 }, 00:36:09.888 { 00:36:09.888 "name": "BaseBdev3", 00:36:09.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:09.888 "is_configured": false, 00:36:09.888 "data_offset": 0, 00:36:09.888 "data_size": 0 00:36:09.888 } 00:36:09.888 ] 00:36:09.888 }' 00:36:09.888 05:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:09.888 05:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:10.454 [2024-12-09 05:26:57.316501] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:10.454 [2024-12-09 05:26:57.316547] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:10.454 [2024-12-09 05:26:57.324515] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:10.454 [2024-12-09 05:26:57.324602] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:10.454 [2024-12-09 05:26:57.324634] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:10.454 [2024-12-09 05:26:57.324650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:10.454 [2024-12-09 05:26:57.324660] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:10.454 [2024-12-09 05:26:57.324674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:10.454 [2024-12-09 05:26:57.374603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:10.454 BaseBdev1 
00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:10.454 [ 00:36:10.454 { 00:36:10.454 "name": "BaseBdev1", 00:36:10.454 "aliases": [ 00:36:10.454 "de47c3d7-9cec-4dcd-91c3-4ab83beb1c05" 00:36:10.454 ], 00:36:10.454 "product_name": "Malloc disk", 00:36:10.454 "block_size": 512, 00:36:10.454 "num_blocks": 65536, 00:36:10.454 "uuid": "de47c3d7-9cec-4dcd-91c3-4ab83beb1c05", 00:36:10.454 "assigned_rate_limits": { 00:36:10.454 
"rw_ios_per_sec": 0, 00:36:10.454 "rw_mbytes_per_sec": 0, 00:36:10.454 "r_mbytes_per_sec": 0, 00:36:10.454 "w_mbytes_per_sec": 0 00:36:10.454 }, 00:36:10.454 "claimed": true, 00:36:10.454 "claim_type": "exclusive_write", 00:36:10.454 "zoned": false, 00:36:10.454 "supported_io_types": { 00:36:10.454 "read": true, 00:36:10.454 "write": true, 00:36:10.454 "unmap": true, 00:36:10.454 "flush": true, 00:36:10.454 "reset": true, 00:36:10.454 "nvme_admin": false, 00:36:10.454 "nvme_io": false, 00:36:10.454 "nvme_io_md": false, 00:36:10.454 "write_zeroes": true, 00:36:10.454 "zcopy": true, 00:36:10.454 "get_zone_info": false, 00:36:10.454 "zone_management": false, 00:36:10.454 "zone_append": false, 00:36:10.454 "compare": false, 00:36:10.454 "compare_and_write": false, 00:36:10.454 "abort": true, 00:36:10.454 "seek_hole": false, 00:36:10.454 "seek_data": false, 00:36:10.454 "copy": true, 00:36:10.454 "nvme_iov_md": false 00:36:10.454 }, 00:36:10.454 "memory_domains": [ 00:36:10.454 { 00:36:10.454 "dma_device_id": "system", 00:36:10.454 "dma_device_type": 1 00:36:10.454 }, 00:36:10.454 { 00:36:10.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:10.454 "dma_device_type": 2 00:36:10.454 } 00:36:10.454 ], 00:36:10.454 "driver_specific": {} 00:36:10.454 } 00:36:10.454 ] 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.454 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:10.712 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.712 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:10.712 "name": "Existed_Raid", 00:36:10.712 "uuid": "c255e023-1693-4642-b10b-13837085fad8", 00:36:10.712 "strip_size_kb": 64, 00:36:10.712 "state": "configuring", 00:36:10.712 "raid_level": "raid0", 00:36:10.712 "superblock": true, 00:36:10.712 "num_base_bdevs": 3, 00:36:10.712 "num_base_bdevs_discovered": 1, 00:36:10.712 "num_base_bdevs_operational": 3, 00:36:10.712 "base_bdevs_list": [ 00:36:10.712 { 00:36:10.712 "name": "BaseBdev1", 00:36:10.712 "uuid": "de47c3d7-9cec-4dcd-91c3-4ab83beb1c05", 00:36:10.712 "is_configured": true, 00:36:10.712 "data_offset": 2048, 00:36:10.712 "data_size": 63488 
00:36:10.712 }, 00:36:10.712 { 00:36:10.712 "name": "BaseBdev2", 00:36:10.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:10.712 "is_configured": false, 00:36:10.712 "data_offset": 0, 00:36:10.712 "data_size": 0 00:36:10.712 }, 00:36:10.712 { 00:36:10.712 "name": "BaseBdev3", 00:36:10.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:10.712 "is_configured": false, 00:36:10.712 "data_offset": 0, 00:36:10.712 "data_size": 0 00:36:10.712 } 00:36:10.712 ] 00:36:10.712 }' 00:36:10.712 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:10.712 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:10.971 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:10.971 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.971 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:10.971 [2024-12-09 05:26:57.922769] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:10.971 [2024-12-09 05:26:57.923069] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:36:10.971 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.971 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:36:10.971 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.971 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:10.971 [2024-12-09 05:26:57.930844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:10.971 [2024-12-09 
05:26:57.933334] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:10.971 [2024-12-09 05:26:57.933540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:10.971 [2024-12-09 05:26:57.933568] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:10.971 [2024-12-09 05:26:57.933585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:10.971 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.971 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:36:10.971 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:10.971 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:36:10.971 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:10.971 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:10.971 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:10.971 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:10.971 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:10.971 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:10.971 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:10.971 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:10.971 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:36:10.971 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:10.971 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:10.971 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.971 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:11.237 05:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.237 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:11.237 "name": "Existed_Raid", 00:36:11.237 "uuid": "e0b04e80-ad98-4bf1-854c-4cbeb811956a", 00:36:11.237 "strip_size_kb": 64, 00:36:11.237 "state": "configuring", 00:36:11.237 "raid_level": "raid0", 00:36:11.237 "superblock": true, 00:36:11.237 "num_base_bdevs": 3, 00:36:11.237 "num_base_bdevs_discovered": 1, 00:36:11.237 "num_base_bdevs_operational": 3, 00:36:11.237 "base_bdevs_list": [ 00:36:11.237 { 00:36:11.237 "name": "BaseBdev1", 00:36:11.237 "uuid": "de47c3d7-9cec-4dcd-91c3-4ab83beb1c05", 00:36:11.237 "is_configured": true, 00:36:11.237 "data_offset": 2048, 00:36:11.237 "data_size": 63488 00:36:11.237 }, 00:36:11.237 { 00:36:11.237 "name": "BaseBdev2", 00:36:11.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:11.237 "is_configured": false, 00:36:11.237 "data_offset": 0, 00:36:11.237 "data_size": 0 00:36:11.237 }, 00:36:11.237 { 00:36:11.237 "name": "BaseBdev3", 00:36:11.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:11.237 "is_configured": false, 00:36:11.237 "data_offset": 0, 00:36:11.237 "data_size": 0 00:36:11.237 } 00:36:11.237 ] 00:36:11.237 }' 00:36:11.237 05:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:11.237 05:26:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:36:11.522 05:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:36:11.522 05:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.522 05:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:11.779 [2024-12-09 05:26:58.505231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:11.779 BaseBdev2 00:36:11.779 05:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.779 05:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:36:11.779 05:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:36:11.779 05:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:11.779 05:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:11.779 05:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:11.779 05:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:11.779 05:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:11.779 05:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.779 05:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:11.779 05:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.779 05:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:11.779 05:26:58 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.779 05:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:11.779 [ 00:36:11.779 { 00:36:11.779 "name": "BaseBdev2", 00:36:11.779 "aliases": [ 00:36:11.779 "e0fd4ce2-b3cf-402c-a092-e64ffdc72b0f" 00:36:11.779 ], 00:36:11.779 "product_name": "Malloc disk", 00:36:11.779 "block_size": 512, 00:36:11.779 "num_blocks": 65536, 00:36:11.779 "uuid": "e0fd4ce2-b3cf-402c-a092-e64ffdc72b0f", 00:36:11.779 "assigned_rate_limits": { 00:36:11.779 "rw_ios_per_sec": 0, 00:36:11.780 "rw_mbytes_per_sec": 0, 00:36:11.780 "r_mbytes_per_sec": 0, 00:36:11.780 "w_mbytes_per_sec": 0 00:36:11.780 }, 00:36:11.780 "claimed": true, 00:36:11.780 "claim_type": "exclusive_write", 00:36:11.780 "zoned": false, 00:36:11.780 "supported_io_types": { 00:36:11.780 "read": true, 00:36:11.780 "write": true, 00:36:11.780 "unmap": true, 00:36:11.780 "flush": true, 00:36:11.780 "reset": true, 00:36:11.780 "nvme_admin": false, 00:36:11.780 "nvme_io": false, 00:36:11.780 "nvme_io_md": false, 00:36:11.780 "write_zeroes": true, 00:36:11.780 "zcopy": true, 00:36:11.780 "get_zone_info": false, 00:36:11.780 "zone_management": false, 00:36:11.780 "zone_append": false, 00:36:11.780 "compare": false, 00:36:11.780 "compare_and_write": false, 00:36:11.780 "abort": true, 00:36:11.780 "seek_hole": false, 00:36:11.780 "seek_data": false, 00:36:11.780 "copy": true, 00:36:11.780 "nvme_iov_md": false 00:36:11.780 }, 00:36:11.780 "memory_domains": [ 00:36:11.780 { 00:36:11.780 "dma_device_id": "system", 00:36:11.780 "dma_device_type": 1 00:36:11.780 }, 00:36:11.780 { 00:36:11.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:11.780 "dma_device_type": 2 00:36:11.780 } 00:36:11.780 ], 00:36:11.780 "driver_specific": {} 00:36:11.780 } 00:36:11.780 ] 00:36:11.780 05:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.780 05:26:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:36:11.780 05:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:11.780 05:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:11.780 05:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:36:11.780 05:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:11.780 05:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:11.780 05:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:11.780 05:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:11.780 05:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:11.780 05:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:11.780 05:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:11.780 05:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:11.780 05:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:11.780 05:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:11.780 05:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:11.780 05:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.780 05:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:11.780 05:26:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.780 05:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:11.780 "name": "Existed_Raid", 00:36:11.780 "uuid": "e0b04e80-ad98-4bf1-854c-4cbeb811956a", 00:36:11.780 "strip_size_kb": 64, 00:36:11.780 "state": "configuring", 00:36:11.780 "raid_level": "raid0", 00:36:11.780 "superblock": true, 00:36:11.780 "num_base_bdevs": 3, 00:36:11.780 "num_base_bdevs_discovered": 2, 00:36:11.780 "num_base_bdevs_operational": 3, 00:36:11.780 "base_bdevs_list": [ 00:36:11.780 { 00:36:11.780 "name": "BaseBdev1", 00:36:11.780 "uuid": "de47c3d7-9cec-4dcd-91c3-4ab83beb1c05", 00:36:11.780 "is_configured": true, 00:36:11.780 "data_offset": 2048, 00:36:11.780 "data_size": 63488 00:36:11.780 }, 00:36:11.780 { 00:36:11.780 "name": "BaseBdev2", 00:36:11.780 "uuid": "e0fd4ce2-b3cf-402c-a092-e64ffdc72b0f", 00:36:11.780 "is_configured": true, 00:36:11.780 "data_offset": 2048, 00:36:11.780 "data_size": 63488 00:36:11.780 }, 00:36:11.780 { 00:36:11.780 "name": "BaseBdev3", 00:36:11.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:11.780 "is_configured": false, 00:36:11.780 "data_offset": 0, 00:36:11.780 "data_size": 0 00:36:11.780 } 00:36:11.780 ] 00:36:11.780 }' 00:36:11.780 05:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:11.780 05:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:12.344 [2024-12-09 05:26:59.124252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:12.344 [2024-12-09 05:26:59.124586] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:36:12.344 [2024-12-09 05:26:59.124632] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:36:12.344 BaseBdev3 00:36:12.344 [2024-12-09 05:26:59.124998] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:36:12.344 [2024-12-09 05:26:59.125202] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:36:12.344 [2024-12-09 05:26:59.125383] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:36:12.344 [2024-12-09 05:26:59.125608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:12.344 [ 00:36:12.344 { 00:36:12.344 "name": "BaseBdev3", 00:36:12.344 "aliases": [ 00:36:12.344 "52aba42a-f7c0-4e4a-b2a2-cf2531eb67a8" 00:36:12.344 ], 00:36:12.344 "product_name": "Malloc disk", 00:36:12.344 "block_size": 512, 00:36:12.344 "num_blocks": 65536, 00:36:12.344 "uuid": "52aba42a-f7c0-4e4a-b2a2-cf2531eb67a8", 00:36:12.344 "assigned_rate_limits": { 00:36:12.344 "rw_ios_per_sec": 0, 00:36:12.344 "rw_mbytes_per_sec": 0, 00:36:12.344 "r_mbytes_per_sec": 0, 00:36:12.344 "w_mbytes_per_sec": 0 00:36:12.344 }, 00:36:12.344 "claimed": true, 00:36:12.344 "claim_type": "exclusive_write", 00:36:12.344 "zoned": false, 00:36:12.344 "supported_io_types": { 00:36:12.344 "read": true, 00:36:12.344 "write": true, 00:36:12.344 "unmap": true, 00:36:12.344 "flush": true, 00:36:12.344 "reset": true, 00:36:12.344 "nvme_admin": false, 00:36:12.344 "nvme_io": false, 00:36:12.344 "nvme_io_md": false, 00:36:12.344 "write_zeroes": true, 00:36:12.344 "zcopy": true, 00:36:12.344 "get_zone_info": false, 00:36:12.344 "zone_management": false, 00:36:12.344 "zone_append": false, 00:36:12.344 "compare": false, 00:36:12.344 "compare_and_write": false, 00:36:12.344 "abort": true, 00:36:12.344 "seek_hole": false, 00:36:12.344 "seek_data": false, 00:36:12.344 "copy": true, 00:36:12.344 "nvme_iov_md": false 00:36:12.344 }, 00:36:12.344 "memory_domains": [ 00:36:12.344 { 00:36:12.344 "dma_device_id": "system", 00:36:12.344 "dma_device_type": 1 00:36:12.344 }, 00:36:12.344 { 00:36:12.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:12.344 "dma_device_type": 2 00:36:12.344 } 00:36:12.344 ], 00:36:12.344 "driver_specific": 
{} 00:36:12.344 } 00:36:12.344 ] 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:12.344 "name": "Existed_Raid", 00:36:12.344 "uuid": "e0b04e80-ad98-4bf1-854c-4cbeb811956a", 00:36:12.344 "strip_size_kb": 64, 00:36:12.344 "state": "online", 00:36:12.344 "raid_level": "raid0", 00:36:12.344 "superblock": true, 00:36:12.344 "num_base_bdevs": 3, 00:36:12.344 "num_base_bdevs_discovered": 3, 00:36:12.344 "num_base_bdevs_operational": 3, 00:36:12.344 "base_bdevs_list": [ 00:36:12.344 { 00:36:12.344 "name": "BaseBdev1", 00:36:12.344 "uuid": "de47c3d7-9cec-4dcd-91c3-4ab83beb1c05", 00:36:12.344 "is_configured": true, 00:36:12.344 "data_offset": 2048, 00:36:12.344 "data_size": 63488 00:36:12.344 }, 00:36:12.344 { 00:36:12.344 "name": "BaseBdev2", 00:36:12.344 "uuid": "e0fd4ce2-b3cf-402c-a092-e64ffdc72b0f", 00:36:12.344 "is_configured": true, 00:36:12.344 "data_offset": 2048, 00:36:12.344 "data_size": 63488 00:36:12.344 }, 00:36:12.344 { 00:36:12.344 "name": "BaseBdev3", 00:36:12.344 "uuid": "52aba42a-f7c0-4e4a-b2a2-cf2531eb67a8", 00:36:12.344 "is_configured": true, 00:36:12.344 "data_offset": 2048, 00:36:12.344 "data_size": 63488 00:36:12.344 } 00:36:12.344 ] 00:36:12.344 }' 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:12.344 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:12.909 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:36:12.909 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:36:12.909 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:36:12.909 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:12.909 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:36:12.909 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:12.909 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:36:12.909 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:12.909 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.909 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:12.909 [2024-12-09 05:26:59.688889] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:12.909 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.909 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:12.909 "name": "Existed_Raid", 00:36:12.909 "aliases": [ 00:36:12.909 "e0b04e80-ad98-4bf1-854c-4cbeb811956a" 00:36:12.909 ], 00:36:12.909 "product_name": "Raid Volume", 00:36:12.909 "block_size": 512, 00:36:12.909 "num_blocks": 190464, 00:36:12.909 "uuid": "e0b04e80-ad98-4bf1-854c-4cbeb811956a", 00:36:12.909 "assigned_rate_limits": { 00:36:12.909 "rw_ios_per_sec": 0, 00:36:12.909 "rw_mbytes_per_sec": 0, 00:36:12.909 "r_mbytes_per_sec": 0, 00:36:12.909 "w_mbytes_per_sec": 0 00:36:12.909 }, 00:36:12.909 "claimed": false, 00:36:12.909 "zoned": false, 00:36:12.909 "supported_io_types": { 00:36:12.909 "read": true, 00:36:12.909 "write": true, 00:36:12.909 "unmap": true, 00:36:12.909 "flush": true, 00:36:12.909 "reset": true, 00:36:12.909 "nvme_admin": false, 00:36:12.909 "nvme_io": false, 00:36:12.909 "nvme_io_md": false, 00:36:12.909 
"write_zeroes": true, 00:36:12.909 "zcopy": false, 00:36:12.909 "get_zone_info": false, 00:36:12.909 "zone_management": false, 00:36:12.909 "zone_append": false, 00:36:12.909 "compare": false, 00:36:12.909 "compare_and_write": false, 00:36:12.909 "abort": false, 00:36:12.909 "seek_hole": false, 00:36:12.909 "seek_data": false, 00:36:12.909 "copy": false, 00:36:12.909 "nvme_iov_md": false 00:36:12.909 }, 00:36:12.909 "memory_domains": [ 00:36:12.909 { 00:36:12.909 "dma_device_id": "system", 00:36:12.909 "dma_device_type": 1 00:36:12.909 }, 00:36:12.909 { 00:36:12.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:12.909 "dma_device_type": 2 00:36:12.909 }, 00:36:12.909 { 00:36:12.909 "dma_device_id": "system", 00:36:12.909 "dma_device_type": 1 00:36:12.909 }, 00:36:12.909 { 00:36:12.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:12.909 "dma_device_type": 2 00:36:12.909 }, 00:36:12.909 { 00:36:12.909 "dma_device_id": "system", 00:36:12.909 "dma_device_type": 1 00:36:12.909 }, 00:36:12.909 { 00:36:12.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:12.909 "dma_device_type": 2 00:36:12.909 } 00:36:12.909 ], 00:36:12.909 "driver_specific": { 00:36:12.909 "raid": { 00:36:12.909 "uuid": "e0b04e80-ad98-4bf1-854c-4cbeb811956a", 00:36:12.909 "strip_size_kb": 64, 00:36:12.909 "state": "online", 00:36:12.909 "raid_level": "raid0", 00:36:12.909 "superblock": true, 00:36:12.909 "num_base_bdevs": 3, 00:36:12.909 "num_base_bdevs_discovered": 3, 00:36:12.909 "num_base_bdevs_operational": 3, 00:36:12.909 "base_bdevs_list": [ 00:36:12.909 { 00:36:12.909 "name": "BaseBdev1", 00:36:12.909 "uuid": "de47c3d7-9cec-4dcd-91c3-4ab83beb1c05", 00:36:12.909 "is_configured": true, 00:36:12.909 "data_offset": 2048, 00:36:12.909 "data_size": 63488 00:36:12.909 }, 00:36:12.909 { 00:36:12.909 "name": "BaseBdev2", 00:36:12.909 "uuid": "e0fd4ce2-b3cf-402c-a092-e64ffdc72b0f", 00:36:12.909 "is_configured": true, 00:36:12.909 "data_offset": 2048, 00:36:12.909 "data_size": 63488 00:36:12.909 }, 
00:36:12.909 { 00:36:12.909 "name": "BaseBdev3", 00:36:12.909 "uuid": "52aba42a-f7c0-4e4a-b2a2-cf2531eb67a8", 00:36:12.909 "is_configured": true, 00:36:12.909 "data_offset": 2048, 00:36:12.909 "data_size": 63488 00:36:12.909 } 00:36:12.909 ] 00:36:12.909 } 00:36:12.909 } 00:36:12.909 }' 00:36:12.909 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:12.909 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:36:12.909 BaseBdev2 00:36:12.909 BaseBdev3' 00:36:12.909 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:12.909 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:12.909 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:12.909 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:36:12.909 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:12.909 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.909 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:12.909 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.167 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:13.167 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:13.167 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:13.167 
05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:36:13.167 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:13.167 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.167 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:13.167 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.167 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:13.167 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:13.167 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:13.167 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:36:13.167 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:13.167 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.167 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:13.167 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.167 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:13.167 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:13.167 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:36:13.167 05:27:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.167 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:13.167 [2024-12-09 05:27:00.008589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:13.167 [2024-12-09 05:27:00.008625] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:13.167 [2024-12-09 05:27:00.008698] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:13.167 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.167 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:36:13.167 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:36:13.167 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:36:13.167 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:36:13.167 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:36:13.167 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:36:13.167 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:13.167 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:36:13.167 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:13.167 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:13.167 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:13.167 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:36:13.167 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:13.167 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:13.167 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:13.167 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:13.167 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.167 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:13.167 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:13.167 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.424 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:13.424 "name": "Existed_Raid", 00:36:13.424 "uuid": "e0b04e80-ad98-4bf1-854c-4cbeb811956a", 00:36:13.424 "strip_size_kb": 64, 00:36:13.424 "state": "offline", 00:36:13.424 "raid_level": "raid0", 00:36:13.424 "superblock": true, 00:36:13.424 "num_base_bdevs": 3, 00:36:13.424 "num_base_bdevs_discovered": 2, 00:36:13.424 "num_base_bdevs_operational": 2, 00:36:13.424 "base_bdevs_list": [ 00:36:13.424 { 00:36:13.424 "name": null, 00:36:13.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:13.424 "is_configured": false, 00:36:13.424 "data_offset": 0, 00:36:13.424 "data_size": 63488 00:36:13.424 }, 00:36:13.424 { 00:36:13.424 "name": "BaseBdev2", 00:36:13.424 "uuid": "e0fd4ce2-b3cf-402c-a092-e64ffdc72b0f", 00:36:13.424 "is_configured": true, 00:36:13.424 "data_offset": 2048, 00:36:13.424 "data_size": 63488 00:36:13.424 }, 00:36:13.424 { 00:36:13.424 "name": "BaseBdev3", 00:36:13.424 "uuid": "52aba42a-f7c0-4e4a-b2a2-cf2531eb67a8", 
00:36:13.424 "is_configured": true, 00:36:13.424 "data_offset": 2048, 00:36:13.424 "data_size": 63488 00:36:13.424 } 00:36:13.424 ] 00:36:13.424 }' 00:36:13.424 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:13.424 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:13.681 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:36:13.681 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:13.681 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:13.681 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.681 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:13.681 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:36:13.681 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.681 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:36:13.681 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:13.681 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:36:13.681 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.681 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:13.681 [2024-12-09 05:27:00.650677] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:13.938 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.938 05:27:00 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:36:13.939 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:13.939 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:13.939 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:36:13.939 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.939 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:13.939 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.939 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:36:13.939 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:13.939 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:36:13.939 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.939 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:13.939 [2024-12-09 05:27:00.800615] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:36:13.939 [2024-12-09 05:27:00.800677] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:36:13.939 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.939 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:36:13.939 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:13.939 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:36:13.939 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:13.939 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.939 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:13.939 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.197 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:36:14.197 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:36:14.197 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:36:14.197 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:36:14.197 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:14.197 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:36:14.197 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.197 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:14.197 BaseBdev2 00:36:14.197 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.197 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:36:14.197 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:36:14.197 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:14.197 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:14.197 05:27:00 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:14.197 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:14.197 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:14.197 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.197 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:14.197 [ 00:36:14.197 { 00:36:14.197 "name": "BaseBdev2", 00:36:14.197 "aliases": [ 00:36:14.197 "39d5f85d-4e7c-42a2-aa05-4a66779016c5" 00:36:14.197 ], 00:36:14.197 "product_name": "Malloc disk", 00:36:14.197 "block_size": 512, 00:36:14.197 "num_blocks": 65536, 00:36:14.197 "uuid": "39d5f85d-4e7c-42a2-aa05-4a66779016c5", 00:36:14.197 "assigned_rate_limits": { 00:36:14.197 "rw_ios_per_sec": 0, 00:36:14.197 "rw_mbytes_per_sec": 0, 00:36:14.197 "r_mbytes_per_sec": 0, 00:36:14.197 "w_mbytes_per_sec": 0 00:36:14.197 }, 00:36:14.197 "claimed": false, 00:36:14.197 "zoned": false, 00:36:14.197 "supported_io_types": { 00:36:14.197 "read": true, 00:36:14.197 "write": true, 00:36:14.197 "unmap": true, 00:36:14.197 "flush": true, 00:36:14.197 "reset": true, 00:36:14.197 "nvme_admin": false, 00:36:14.197 "nvme_io": false, 00:36:14.197 "nvme_io_md": false, 00:36:14.197 "write_zeroes": true, 00:36:14.197 "zcopy": true, 00:36:14.197 "get_zone_info": false, 00:36:14.197 "zone_management": false, 00:36:14.197 
"zone_append": false, 00:36:14.197 "compare": false, 00:36:14.197 "compare_and_write": false, 00:36:14.197 "abort": true, 00:36:14.197 "seek_hole": false, 00:36:14.197 "seek_data": false, 00:36:14.197 "copy": true, 00:36:14.197 "nvme_iov_md": false 00:36:14.197 }, 00:36:14.197 "memory_domains": [ 00:36:14.197 { 00:36:14.197 "dma_device_id": "system", 00:36:14.197 "dma_device_type": 1 00:36:14.197 }, 00:36:14.197 { 00:36:14.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:14.197 "dma_device_type": 2 00:36:14.197 } 00:36:14.197 ], 00:36:14.197 "driver_specific": {} 00:36:14.197 } 00:36:14.197 ] 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:14.197 BaseBdev3 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:14.197 
05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:14.197 [ 00:36:14.197 { 00:36:14.197 "name": "BaseBdev3", 00:36:14.197 "aliases": [ 00:36:14.197 "ee82e3b5-f938-4067-92a6-396f7dc46b5f" 00:36:14.197 ], 00:36:14.197 "product_name": "Malloc disk", 00:36:14.197 "block_size": 512, 00:36:14.197 "num_blocks": 65536, 00:36:14.197 "uuid": "ee82e3b5-f938-4067-92a6-396f7dc46b5f", 00:36:14.197 "assigned_rate_limits": { 00:36:14.197 "rw_ios_per_sec": 0, 00:36:14.197 "rw_mbytes_per_sec": 0, 00:36:14.197 "r_mbytes_per_sec": 0, 00:36:14.197 "w_mbytes_per_sec": 0 00:36:14.197 }, 00:36:14.197 "claimed": false, 00:36:14.197 "zoned": false, 00:36:14.197 "supported_io_types": { 00:36:14.197 "read": true, 00:36:14.197 "write": true, 00:36:14.197 "unmap": true, 00:36:14.197 "flush": true, 00:36:14.197 "reset": true, 00:36:14.197 "nvme_admin": false, 00:36:14.197 "nvme_io": false, 00:36:14.197 "nvme_io_md": false, 00:36:14.197 "write_zeroes": true, 00:36:14.197 "zcopy": true, 00:36:14.197 "get_zone_info": false, 
00:36:14.197 "zone_management": false, 00:36:14.197 "zone_append": false, 00:36:14.197 "compare": false, 00:36:14.197 "compare_and_write": false, 00:36:14.197 "abort": true, 00:36:14.197 "seek_hole": false, 00:36:14.197 "seek_data": false, 00:36:14.197 "copy": true, 00:36:14.197 "nvme_iov_md": false 00:36:14.197 }, 00:36:14.197 "memory_domains": [ 00:36:14.197 { 00:36:14.197 "dma_device_id": "system", 00:36:14.197 "dma_device_type": 1 00:36:14.197 }, 00:36:14.197 { 00:36:14.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:14.197 "dma_device_type": 2 00:36:14.197 } 00:36:14.197 ], 00:36:14.197 "driver_specific": {} 00:36:14.197 } 00:36:14.197 ] 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:14.197 [2024-12-09 05:27:01.102705] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:14.197 [2024-12-09 05:27:01.102773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:14.197 [2024-12-09 05:27:01.102807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:14.197 [2024-12-09 05:27:01.105227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:14.197 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:14.198 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:14.198 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:14.198 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:14.198 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:14.198 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:14.198 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:14.198 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.198 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:14.198 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.198 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:36:14.198 "name": "Existed_Raid", 00:36:14.198 "uuid": "2c9f6acc-ed66-449b-bfca-d998efaa866e", 00:36:14.198 "strip_size_kb": 64, 00:36:14.198 "state": "configuring", 00:36:14.198 "raid_level": "raid0", 00:36:14.198 "superblock": true, 00:36:14.198 "num_base_bdevs": 3, 00:36:14.198 "num_base_bdevs_discovered": 2, 00:36:14.198 "num_base_bdevs_operational": 3, 00:36:14.198 "base_bdevs_list": [ 00:36:14.198 { 00:36:14.198 "name": "BaseBdev1", 00:36:14.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:14.198 "is_configured": false, 00:36:14.198 "data_offset": 0, 00:36:14.198 "data_size": 0 00:36:14.198 }, 00:36:14.198 { 00:36:14.198 "name": "BaseBdev2", 00:36:14.198 "uuid": "39d5f85d-4e7c-42a2-aa05-4a66779016c5", 00:36:14.198 "is_configured": true, 00:36:14.198 "data_offset": 2048, 00:36:14.198 "data_size": 63488 00:36:14.198 }, 00:36:14.198 { 00:36:14.198 "name": "BaseBdev3", 00:36:14.198 "uuid": "ee82e3b5-f938-4067-92a6-396f7dc46b5f", 00:36:14.198 "is_configured": true, 00:36:14.198 "data_offset": 2048, 00:36:14.198 "data_size": 63488 00:36:14.198 } 00:36:14.198 ] 00:36:14.198 }' 00:36:14.198 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:14.198 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:14.763 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:36:14.764 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.764 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:14.764 [2024-12-09 05:27:01.614808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:14.764 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.764 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:36:14.764 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:14.764 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:14.764 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:14.764 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:14.764 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:14.764 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:14.764 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:14.764 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:14.764 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:14.764 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:14.764 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:14.764 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.764 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:14.764 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.764 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:14.764 "name": "Existed_Raid", 00:36:14.764 "uuid": "2c9f6acc-ed66-449b-bfca-d998efaa866e", 00:36:14.764 "strip_size_kb": 64, 00:36:14.764 "state": "configuring", 00:36:14.764 "raid_level": "raid0", 
00:36:14.764 "superblock": true, 00:36:14.764 "num_base_bdevs": 3, 00:36:14.764 "num_base_bdevs_discovered": 1, 00:36:14.764 "num_base_bdevs_operational": 3, 00:36:14.764 "base_bdevs_list": [ 00:36:14.764 { 00:36:14.764 "name": "BaseBdev1", 00:36:14.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:14.764 "is_configured": false, 00:36:14.764 "data_offset": 0, 00:36:14.764 "data_size": 0 00:36:14.764 }, 00:36:14.764 { 00:36:14.764 "name": null, 00:36:14.764 "uuid": "39d5f85d-4e7c-42a2-aa05-4a66779016c5", 00:36:14.764 "is_configured": false, 00:36:14.764 "data_offset": 0, 00:36:14.764 "data_size": 63488 00:36:14.764 }, 00:36:14.764 { 00:36:14.764 "name": "BaseBdev3", 00:36:14.764 "uuid": "ee82e3b5-f938-4067-92a6-396f7dc46b5f", 00:36:14.764 "is_configured": true, 00:36:14.764 "data_offset": 2048, 00:36:14.764 "data_size": 63488 00:36:14.764 } 00:36:14.764 ] 00:36:14.764 }' 00:36:14.764 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:14.764 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:15.330 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:15.330 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:36:15.330 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.330 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:15.330 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.330 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:36:15.330 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:36:15.330 05:27:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.330 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:15.330 [2024-12-09 05:27:02.257321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:15.330 BaseBdev1 00:36:15.330 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.330 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:36:15.330 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:36:15.330 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:15.330 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:15.330 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:15.330 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:15.330 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:15.330 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.330 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:15.330 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.330 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:15.330 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.330 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:15.330 [ 00:36:15.330 { 00:36:15.330 "name": "BaseBdev1", 00:36:15.330 
"aliases": [ 00:36:15.330 "7a95ae35-9766-41ae-abf8-07dc1d91cb1d" 00:36:15.330 ], 00:36:15.330 "product_name": "Malloc disk", 00:36:15.330 "block_size": 512, 00:36:15.330 "num_blocks": 65536, 00:36:15.330 "uuid": "7a95ae35-9766-41ae-abf8-07dc1d91cb1d", 00:36:15.330 "assigned_rate_limits": { 00:36:15.330 "rw_ios_per_sec": 0, 00:36:15.330 "rw_mbytes_per_sec": 0, 00:36:15.330 "r_mbytes_per_sec": 0, 00:36:15.330 "w_mbytes_per_sec": 0 00:36:15.330 }, 00:36:15.330 "claimed": true, 00:36:15.330 "claim_type": "exclusive_write", 00:36:15.330 "zoned": false, 00:36:15.330 "supported_io_types": { 00:36:15.330 "read": true, 00:36:15.330 "write": true, 00:36:15.330 "unmap": true, 00:36:15.330 "flush": true, 00:36:15.330 "reset": true, 00:36:15.330 "nvme_admin": false, 00:36:15.330 "nvme_io": false, 00:36:15.330 "nvme_io_md": false, 00:36:15.330 "write_zeroes": true, 00:36:15.330 "zcopy": true, 00:36:15.330 "get_zone_info": false, 00:36:15.330 "zone_management": false, 00:36:15.330 "zone_append": false, 00:36:15.330 "compare": false, 00:36:15.330 "compare_and_write": false, 00:36:15.330 "abort": true, 00:36:15.330 "seek_hole": false, 00:36:15.330 "seek_data": false, 00:36:15.330 "copy": true, 00:36:15.330 "nvme_iov_md": false 00:36:15.330 }, 00:36:15.330 "memory_domains": [ 00:36:15.330 { 00:36:15.330 "dma_device_id": "system", 00:36:15.330 "dma_device_type": 1 00:36:15.330 }, 00:36:15.330 { 00:36:15.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:15.330 "dma_device_type": 2 00:36:15.330 } 00:36:15.330 ], 00:36:15.330 "driver_specific": {} 00:36:15.330 } 00:36:15.330 ] 00:36:15.330 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.330 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:15.330 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:36:15.330 05:27:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:15.331 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:15.331 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:15.331 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:15.331 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:15.331 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:15.331 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:15.331 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:15.331 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:15.331 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:15.331 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:15.331 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.331 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:15.589 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.589 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:15.589 "name": "Existed_Raid", 00:36:15.589 "uuid": "2c9f6acc-ed66-449b-bfca-d998efaa866e", 00:36:15.589 "strip_size_kb": 64, 00:36:15.589 "state": "configuring", 00:36:15.589 "raid_level": "raid0", 00:36:15.589 "superblock": true, 00:36:15.589 "num_base_bdevs": 3, 00:36:15.589 
"num_base_bdevs_discovered": 2, 00:36:15.589 "num_base_bdevs_operational": 3, 00:36:15.589 "base_bdevs_list": [ 00:36:15.589 { 00:36:15.589 "name": "BaseBdev1", 00:36:15.589 "uuid": "7a95ae35-9766-41ae-abf8-07dc1d91cb1d", 00:36:15.589 "is_configured": true, 00:36:15.589 "data_offset": 2048, 00:36:15.589 "data_size": 63488 00:36:15.589 }, 00:36:15.589 { 00:36:15.589 "name": null, 00:36:15.589 "uuid": "39d5f85d-4e7c-42a2-aa05-4a66779016c5", 00:36:15.589 "is_configured": false, 00:36:15.589 "data_offset": 0, 00:36:15.589 "data_size": 63488 00:36:15.589 }, 00:36:15.589 { 00:36:15.589 "name": "BaseBdev3", 00:36:15.589 "uuid": "ee82e3b5-f938-4067-92a6-396f7dc46b5f", 00:36:15.589 "is_configured": true, 00:36:15.589 "data_offset": 2048, 00:36:15.589 "data_size": 63488 00:36:15.589 } 00:36:15.589 ] 00:36:15.589 }' 00:36:15.589 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:15.589 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:15.847 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:15.847 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:36:15.847 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.847 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:16.106 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.106 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:36:16.106 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:36:16.106 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.106 05:27:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:16.106 [2024-12-09 05:27:02.873527] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:36:16.106 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.106 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:36:16.106 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:16.106 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:16.106 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:16.106 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:16.106 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:16.106 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:16.106 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:16.106 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:16.106 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:16.106 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:16.106 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:16.106 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.106 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:16.106 05:27:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.106 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:16.106 "name": "Existed_Raid", 00:36:16.106 "uuid": "2c9f6acc-ed66-449b-bfca-d998efaa866e", 00:36:16.106 "strip_size_kb": 64, 00:36:16.106 "state": "configuring", 00:36:16.106 "raid_level": "raid0", 00:36:16.106 "superblock": true, 00:36:16.106 "num_base_bdevs": 3, 00:36:16.106 "num_base_bdevs_discovered": 1, 00:36:16.106 "num_base_bdevs_operational": 3, 00:36:16.106 "base_bdevs_list": [ 00:36:16.106 { 00:36:16.106 "name": "BaseBdev1", 00:36:16.106 "uuid": "7a95ae35-9766-41ae-abf8-07dc1d91cb1d", 00:36:16.106 "is_configured": true, 00:36:16.106 "data_offset": 2048, 00:36:16.106 "data_size": 63488 00:36:16.106 }, 00:36:16.106 { 00:36:16.106 "name": null, 00:36:16.106 "uuid": "39d5f85d-4e7c-42a2-aa05-4a66779016c5", 00:36:16.106 "is_configured": false, 00:36:16.106 "data_offset": 0, 00:36:16.106 "data_size": 63488 00:36:16.106 }, 00:36:16.106 { 00:36:16.106 "name": null, 00:36:16.106 "uuid": "ee82e3b5-f938-4067-92a6-396f7dc46b5f", 00:36:16.106 "is_configured": false, 00:36:16.106 "data_offset": 0, 00:36:16.106 "data_size": 63488 00:36:16.106 } 00:36:16.106 ] 00:36:16.106 }' 00:36:16.106 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:16.106 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:16.673 05:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:16.673 05:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:36:16.673 05:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.673 05:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:16.673 05:27:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.673 05:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:36:16.673 05:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:36:16.673 05:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.673 05:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:16.673 [2024-12-09 05:27:03.453862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:16.673 05:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.673 05:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:36:16.673 05:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:16.673 05:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:16.673 05:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:16.673 05:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:16.673 05:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:16.673 05:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:16.673 05:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:16.673 05:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:16.673 05:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:36:16.673 05:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:16.673 05:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:16.673 05:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.673 05:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:16.673 05:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.673 05:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:16.673 "name": "Existed_Raid", 00:36:16.673 "uuid": "2c9f6acc-ed66-449b-bfca-d998efaa866e", 00:36:16.673 "strip_size_kb": 64, 00:36:16.673 "state": "configuring", 00:36:16.673 "raid_level": "raid0", 00:36:16.673 "superblock": true, 00:36:16.673 "num_base_bdevs": 3, 00:36:16.673 "num_base_bdevs_discovered": 2, 00:36:16.673 "num_base_bdevs_operational": 3, 00:36:16.673 "base_bdevs_list": [ 00:36:16.673 { 00:36:16.673 "name": "BaseBdev1", 00:36:16.673 "uuid": "7a95ae35-9766-41ae-abf8-07dc1d91cb1d", 00:36:16.673 "is_configured": true, 00:36:16.673 "data_offset": 2048, 00:36:16.673 "data_size": 63488 00:36:16.673 }, 00:36:16.673 { 00:36:16.673 "name": null, 00:36:16.673 "uuid": "39d5f85d-4e7c-42a2-aa05-4a66779016c5", 00:36:16.673 "is_configured": false, 00:36:16.673 "data_offset": 0, 00:36:16.673 "data_size": 63488 00:36:16.673 }, 00:36:16.673 { 00:36:16.673 "name": "BaseBdev3", 00:36:16.673 "uuid": "ee82e3b5-f938-4067-92a6-396f7dc46b5f", 00:36:16.673 "is_configured": true, 00:36:16.673 "data_offset": 2048, 00:36:16.673 "data_size": 63488 00:36:16.673 } 00:36:16.673 ] 00:36:16.673 }' 00:36:16.673 05:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:16.673 05:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:36:17.240 05:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:17.240 05:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:36:17.240 05:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.240 05:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:17.240 05:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.240 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:36:17.240 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:36:17.240 05:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.240 05:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:17.240 [2024-12-09 05:27:04.050120] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:17.240 05:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.240 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:36:17.240 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:17.240 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:17.240 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:17.240 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:17.240 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:36:17.240 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:17.240 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:17.240 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:17.240 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:17.240 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:17.240 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:17.241 05:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.241 05:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:17.241 05:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.241 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:17.241 "name": "Existed_Raid", 00:36:17.241 "uuid": "2c9f6acc-ed66-449b-bfca-d998efaa866e", 00:36:17.241 "strip_size_kb": 64, 00:36:17.241 "state": "configuring", 00:36:17.241 "raid_level": "raid0", 00:36:17.241 "superblock": true, 00:36:17.241 "num_base_bdevs": 3, 00:36:17.241 "num_base_bdevs_discovered": 1, 00:36:17.241 "num_base_bdevs_operational": 3, 00:36:17.241 "base_bdevs_list": [ 00:36:17.241 { 00:36:17.241 "name": null, 00:36:17.241 "uuid": "7a95ae35-9766-41ae-abf8-07dc1d91cb1d", 00:36:17.241 "is_configured": false, 00:36:17.241 "data_offset": 0, 00:36:17.241 "data_size": 63488 00:36:17.241 }, 00:36:17.241 { 00:36:17.241 "name": null, 00:36:17.241 "uuid": "39d5f85d-4e7c-42a2-aa05-4a66779016c5", 00:36:17.241 "is_configured": false, 00:36:17.241 "data_offset": 0, 00:36:17.241 "data_size": 63488 00:36:17.241 
}, 00:36:17.241 { 00:36:17.241 "name": "BaseBdev3", 00:36:17.241 "uuid": "ee82e3b5-f938-4067-92a6-396f7dc46b5f", 00:36:17.241 "is_configured": true, 00:36:17.241 "data_offset": 2048, 00:36:17.241 "data_size": 63488 00:36:17.241 } 00:36:17.241 ] 00:36:17.241 }' 00:36:17.241 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:17.241 05:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:17.806 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:17.806 05:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.806 05:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:17.806 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:36:17.806 05:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.806 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:36:17.806 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:36:17.806 05:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.806 05:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:17.806 [2024-12-09 05:27:04.722563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:17.806 05:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.806 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:36:17.806 05:27:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:17.806 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:17.806 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:17.806 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:17.806 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:17.806 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:17.806 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:17.806 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:17.806 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:17.806 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:17.806 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:17.806 05:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.806 05:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:17.806 05:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.065 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:18.065 "name": "Existed_Raid", 00:36:18.065 "uuid": "2c9f6acc-ed66-449b-bfca-d998efaa866e", 00:36:18.065 "strip_size_kb": 64, 00:36:18.065 "state": "configuring", 00:36:18.065 "raid_level": "raid0", 00:36:18.065 "superblock": true, 00:36:18.065 "num_base_bdevs": 3, 00:36:18.065 "num_base_bdevs_discovered": 2, 00:36:18.065 
"num_base_bdevs_operational": 3, 00:36:18.065 "base_bdevs_list": [ 00:36:18.065 { 00:36:18.065 "name": null, 00:36:18.065 "uuid": "7a95ae35-9766-41ae-abf8-07dc1d91cb1d", 00:36:18.065 "is_configured": false, 00:36:18.065 "data_offset": 0, 00:36:18.065 "data_size": 63488 00:36:18.065 }, 00:36:18.065 { 00:36:18.065 "name": "BaseBdev2", 00:36:18.065 "uuid": "39d5f85d-4e7c-42a2-aa05-4a66779016c5", 00:36:18.065 "is_configured": true, 00:36:18.065 "data_offset": 2048, 00:36:18.065 "data_size": 63488 00:36:18.065 }, 00:36:18.065 { 00:36:18.065 "name": "BaseBdev3", 00:36:18.065 "uuid": "ee82e3b5-f938-4067-92a6-396f7dc46b5f", 00:36:18.065 "is_configured": true, 00:36:18.065 "data_offset": 2048, 00:36:18.065 "data_size": 63488 00:36:18.065 } 00:36:18.065 ] 00:36:18.065 }' 00:36:18.065 05:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:18.065 05:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:18.322 05:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:18.322 05:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:36:18.322 05:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.322 05:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:18.322 05:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.322 05:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:36:18.580 05:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:18.580 05:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:36:18.580 05:27:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.580 05:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:18.580 05:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.580 05:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7a95ae35-9766-41ae-abf8-07dc1d91cb1d 00:36:18.580 05:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.580 05:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:18.580 [2024-12-09 05:27:05.383005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:36:18.580 [2024-12-09 05:27:05.383326] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:36:18.580 [2024-12-09 05:27:05.383350] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:36:18.580 NewBaseBdev 00:36:18.580 [2024-12-09 05:27:05.383666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:36:18.580 [2024-12-09 05:27:05.383896] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:36:18.580 [2024-12-09 05:27:05.383913] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:36:18.580 [2024-12-09 05:27:05.384112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:18.580 05:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.580 05:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:36:18.580 05:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:36:18.580 05:27:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:18.580 05:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:18.580 05:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:18.581 05:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:18.581 05:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:18.581 05:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.581 05:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:18.581 05:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.581 05:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:36:18.581 05:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.581 05:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:18.581 [ 00:36:18.581 { 00:36:18.581 "name": "NewBaseBdev", 00:36:18.581 "aliases": [ 00:36:18.581 "7a95ae35-9766-41ae-abf8-07dc1d91cb1d" 00:36:18.581 ], 00:36:18.581 "product_name": "Malloc disk", 00:36:18.581 "block_size": 512, 00:36:18.581 "num_blocks": 65536, 00:36:18.581 "uuid": "7a95ae35-9766-41ae-abf8-07dc1d91cb1d", 00:36:18.581 "assigned_rate_limits": { 00:36:18.581 "rw_ios_per_sec": 0, 00:36:18.581 "rw_mbytes_per_sec": 0, 00:36:18.581 "r_mbytes_per_sec": 0, 00:36:18.581 "w_mbytes_per_sec": 0 00:36:18.581 }, 00:36:18.581 "claimed": true, 00:36:18.581 "claim_type": "exclusive_write", 00:36:18.581 "zoned": false, 00:36:18.581 "supported_io_types": { 00:36:18.581 "read": true, 00:36:18.581 "write": true, 00:36:18.581 "unmap": true, 
00:36:18.581 "flush": true, 00:36:18.581 "reset": true, 00:36:18.581 "nvme_admin": false, 00:36:18.581 "nvme_io": false, 00:36:18.581 "nvme_io_md": false, 00:36:18.581 "write_zeroes": true, 00:36:18.581 "zcopy": true, 00:36:18.581 "get_zone_info": false, 00:36:18.581 "zone_management": false, 00:36:18.581 "zone_append": false, 00:36:18.581 "compare": false, 00:36:18.581 "compare_and_write": false, 00:36:18.581 "abort": true, 00:36:18.581 "seek_hole": false, 00:36:18.581 "seek_data": false, 00:36:18.581 "copy": true, 00:36:18.581 "nvme_iov_md": false 00:36:18.581 }, 00:36:18.581 "memory_domains": [ 00:36:18.581 { 00:36:18.581 "dma_device_id": "system", 00:36:18.581 "dma_device_type": 1 00:36:18.581 }, 00:36:18.581 { 00:36:18.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:18.581 "dma_device_type": 2 00:36:18.581 } 00:36:18.581 ], 00:36:18.581 "driver_specific": {} 00:36:18.581 } 00:36:18.581 ] 00:36:18.581 05:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.581 05:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:18.581 05:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:36:18.581 05:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:18.581 05:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:18.581 05:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:18.581 05:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:18.581 05:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:18.581 05:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:18.581 05:27:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:18.581 05:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:18.581 05:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:18.581 05:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:18.581 05:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:18.581 05:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.581 05:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:18.581 05:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.581 05:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:18.581 "name": "Existed_Raid", 00:36:18.581 "uuid": "2c9f6acc-ed66-449b-bfca-d998efaa866e", 00:36:18.581 "strip_size_kb": 64, 00:36:18.581 "state": "online", 00:36:18.581 "raid_level": "raid0", 00:36:18.581 "superblock": true, 00:36:18.581 "num_base_bdevs": 3, 00:36:18.581 "num_base_bdevs_discovered": 3, 00:36:18.581 "num_base_bdevs_operational": 3, 00:36:18.581 "base_bdevs_list": [ 00:36:18.581 { 00:36:18.581 "name": "NewBaseBdev", 00:36:18.581 "uuid": "7a95ae35-9766-41ae-abf8-07dc1d91cb1d", 00:36:18.581 "is_configured": true, 00:36:18.581 "data_offset": 2048, 00:36:18.581 "data_size": 63488 00:36:18.581 }, 00:36:18.581 { 00:36:18.581 "name": "BaseBdev2", 00:36:18.581 "uuid": "39d5f85d-4e7c-42a2-aa05-4a66779016c5", 00:36:18.581 "is_configured": true, 00:36:18.581 "data_offset": 2048, 00:36:18.581 "data_size": 63488 00:36:18.581 }, 00:36:18.581 { 00:36:18.581 "name": "BaseBdev3", 00:36:18.581 "uuid": "ee82e3b5-f938-4067-92a6-396f7dc46b5f", 00:36:18.581 "is_configured": 
true, 00:36:18.581 "data_offset": 2048, 00:36:18.581 "data_size": 63488 00:36:18.581 } 00:36:18.581 ] 00:36:18.581 }' 00:36:18.581 05:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:18.581 05:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:19.146 05:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:36:19.146 05:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:36:19.146 05:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:19.146 05:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:19.146 05:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:36:19.146 05:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:19.146 05:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:36:19.146 05:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.146 05:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:19.146 05:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:19.146 [2024-12-09 05:27:05.947666] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:19.146 05:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.146 05:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:19.146 "name": "Existed_Raid", 00:36:19.146 "aliases": [ 00:36:19.146 "2c9f6acc-ed66-449b-bfca-d998efaa866e" 00:36:19.146 ], 00:36:19.146 "product_name": "Raid Volume", 
00:36:19.146 "block_size": 512, 00:36:19.146 "num_blocks": 190464, 00:36:19.146 "uuid": "2c9f6acc-ed66-449b-bfca-d998efaa866e", 00:36:19.147 "assigned_rate_limits": { 00:36:19.147 "rw_ios_per_sec": 0, 00:36:19.147 "rw_mbytes_per_sec": 0, 00:36:19.147 "r_mbytes_per_sec": 0, 00:36:19.147 "w_mbytes_per_sec": 0 00:36:19.147 }, 00:36:19.147 "claimed": false, 00:36:19.147 "zoned": false, 00:36:19.147 "supported_io_types": { 00:36:19.147 "read": true, 00:36:19.147 "write": true, 00:36:19.147 "unmap": true, 00:36:19.147 "flush": true, 00:36:19.147 "reset": true, 00:36:19.147 "nvme_admin": false, 00:36:19.147 "nvme_io": false, 00:36:19.147 "nvme_io_md": false, 00:36:19.147 "write_zeroes": true, 00:36:19.147 "zcopy": false, 00:36:19.147 "get_zone_info": false, 00:36:19.147 "zone_management": false, 00:36:19.147 "zone_append": false, 00:36:19.147 "compare": false, 00:36:19.147 "compare_and_write": false, 00:36:19.147 "abort": false, 00:36:19.147 "seek_hole": false, 00:36:19.147 "seek_data": false, 00:36:19.147 "copy": false, 00:36:19.147 "nvme_iov_md": false 00:36:19.147 }, 00:36:19.147 "memory_domains": [ 00:36:19.147 { 00:36:19.147 "dma_device_id": "system", 00:36:19.147 "dma_device_type": 1 00:36:19.147 }, 00:36:19.147 { 00:36:19.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:19.147 "dma_device_type": 2 00:36:19.147 }, 00:36:19.147 { 00:36:19.147 "dma_device_id": "system", 00:36:19.147 "dma_device_type": 1 00:36:19.147 }, 00:36:19.147 { 00:36:19.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:19.147 "dma_device_type": 2 00:36:19.147 }, 00:36:19.147 { 00:36:19.147 "dma_device_id": "system", 00:36:19.147 "dma_device_type": 1 00:36:19.147 }, 00:36:19.147 { 00:36:19.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:19.147 "dma_device_type": 2 00:36:19.147 } 00:36:19.147 ], 00:36:19.147 "driver_specific": { 00:36:19.147 "raid": { 00:36:19.147 "uuid": "2c9f6acc-ed66-449b-bfca-d998efaa866e", 00:36:19.147 "strip_size_kb": 64, 00:36:19.147 "state": "online", 00:36:19.147 
"raid_level": "raid0", 00:36:19.147 "superblock": true, 00:36:19.147 "num_base_bdevs": 3, 00:36:19.147 "num_base_bdevs_discovered": 3, 00:36:19.147 "num_base_bdevs_operational": 3, 00:36:19.147 "base_bdevs_list": [ 00:36:19.147 { 00:36:19.147 "name": "NewBaseBdev", 00:36:19.147 "uuid": "7a95ae35-9766-41ae-abf8-07dc1d91cb1d", 00:36:19.147 "is_configured": true, 00:36:19.147 "data_offset": 2048, 00:36:19.147 "data_size": 63488 00:36:19.147 }, 00:36:19.147 { 00:36:19.147 "name": "BaseBdev2", 00:36:19.147 "uuid": "39d5f85d-4e7c-42a2-aa05-4a66779016c5", 00:36:19.147 "is_configured": true, 00:36:19.147 "data_offset": 2048, 00:36:19.147 "data_size": 63488 00:36:19.147 }, 00:36:19.147 { 00:36:19.147 "name": "BaseBdev3", 00:36:19.147 "uuid": "ee82e3b5-f938-4067-92a6-396f7dc46b5f", 00:36:19.147 "is_configured": true, 00:36:19.147 "data_offset": 2048, 00:36:19.147 "data_size": 63488 00:36:19.147 } 00:36:19.147 ] 00:36:19.147 } 00:36:19.147 } 00:36:19.147 }' 00:36:19.147 05:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:19.147 05:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:36:19.147 BaseBdev2 00:36:19.147 BaseBdev3' 00:36:19.147 05:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:19.147 05:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:19.147 05:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:19.147 05:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:36:19.147 05:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:36:19.147 05:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.147 05:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:19.406 05:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.406 05:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:19.406 05:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:19.406 05:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:19.406 05:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:36:19.406 05:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.406 05:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:19.406 05:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:19.406 05:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.406 05:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:19.406 05:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:19.406 05:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:19.406 05:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:36:19.406 05:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.406 05:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:19.406 
05:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:19.406 05:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.406 05:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:19.406 05:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:19.406 05:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:19.406 05:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.406 05:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:19.406 [2024-12-09 05:27:06.271337] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:19.406 [2024-12-09 05:27:06.271369] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:19.406 [2024-12-09 05:27:06.271456] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:19.406 [2024-12-09 05:27:06.271542] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:19.406 [2024-12-09 05:27:06.271561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:36:19.406 05:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.406 05:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64485 00:36:19.406 05:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64485 ']' 00:36:19.406 05:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64485 00:36:19.406 05:27:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:36:19.406 05:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:19.406 05:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64485 00:36:19.406 killing process with pid 64485 00:36:19.406 05:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:19.406 05:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:19.406 05:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64485' 00:36:19.406 05:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64485 00:36:19.406 [2024-12-09 05:27:06.311012] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:19.406 05:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64485 00:36:19.665 [2024-12-09 05:27:06.556475] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:21.038 05:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:36:21.038 00:36:21.038 real 0m11.986s 00:36:21.038 user 0m19.733s 00:36:21.038 sys 0m1.774s 00:36:21.038 05:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:21.038 ************************************ 00:36:21.038 END TEST raid_state_function_test_sb 00:36:21.038 ************************************ 00:36:21.038 05:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:21.038 05:27:07 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:36:21.038 05:27:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:21.038 05:27:07 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:36:21.038 05:27:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:21.038 ************************************ 00:36:21.038 START TEST raid_superblock_test 00:36:21.038 ************************************ 00:36:21.038 05:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:36:21.038 05:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:36:21.038 05:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:36:21.038 05:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:36:21.038 05:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:36:21.038 05:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:36:21.038 05:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:36:21.038 05:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:36:21.038 05:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:36:21.038 05:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:36:21.038 05:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:36:21.038 05:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:36:21.038 05:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:36:21.038 05:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:36:21.038 05:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:36:21.038 05:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:36:21.038 05:27:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:36:21.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:21.038 05:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65122 00:36:21.038 05:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65122 00:36:21.038 05:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65122 ']' 00:36:21.038 05:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:36:21.038 05:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:21.038 05:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:21.038 05:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:21.038 05:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:21.038 05:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:21.038 [2024-12-09 05:27:07.868297] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:36:21.038 [2024-12-09 05:27:07.868497] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65122 ] 00:36:21.296 [2024-12-09 05:27:08.048583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:21.296 [2024-12-09 05:27:08.187402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:21.553 [2024-12-09 05:27:08.390807] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:21.553 [2024-12-09 05:27:08.391159] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:36:22.169 
05:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.169 malloc1 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.169 [2024-12-09 05:27:08.844921] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:22.169 [2024-12-09 05:27:08.845178] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:22.169 [2024-12-09 05:27:08.845352] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:36:22.169 [2024-12-09 05:27:08.845502] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:22.169 [2024-12-09 05:27:08.848445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:22.169 [2024-12-09 05:27:08.848663] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:22.169 pt1 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.169 malloc2 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.169 [2024-12-09 05:27:08.902803] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:22.169 [2024-12-09 05:27:08.902898] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:22.169 [2024-12-09 05:27:08.902936] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:36:22.169 [2024-12-09 05:27:08.902950] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:22.169 [2024-12-09 05:27:08.905625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:22.169 [2024-12-09 05:27:08.905668] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:22.169 
pt2 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.169 malloc3 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.169 [2024-12-09 05:27:08.969322] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:36:22.169 [2024-12-09 05:27:08.969395] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:22.169 [2024-12-09 05:27:08.969442] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:36:22.169 [2024-12-09 05:27:08.969456] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:22.169 [2024-12-09 05:27:08.972286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:22.169 [2024-12-09 05:27:08.972342] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:36:22.169 pt3 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.169 05:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.169 [2024-12-09 05:27:08.981368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:22.169 [2024-12-09 05:27:08.983850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:22.169 [2024-12-09 05:27:08.983941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:36:22.169 [2024-12-09 05:27:08.984149] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:36:22.169 [2024-12-09 05:27:08.984172] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:36:22.169 [2024-12-09 05:27:08.984429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:36:22.169 [2024-12-09 05:27:08.984613] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:36:22.169 [2024-12-09 05:27:08.984628] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:36:22.170 [2024-12-09 05:27:08.984816] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:22.170 05:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.170 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:36:22.170 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:22.170 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:22.170 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:22.170 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:22.170 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:22.170 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:22.170 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:22.170 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:22.170 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:22.170 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:22.170 05:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:22.170 05:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.170 05:27:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.170 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.170 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:22.170 "name": "raid_bdev1", 00:36:22.170 "uuid": "c52632ce-17c0-469e-bf30-e7e04727d04c", 00:36:22.170 "strip_size_kb": 64, 00:36:22.170 "state": "online", 00:36:22.170 "raid_level": "raid0", 00:36:22.170 "superblock": true, 00:36:22.170 "num_base_bdevs": 3, 00:36:22.170 "num_base_bdevs_discovered": 3, 00:36:22.170 "num_base_bdevs_operational": 3, 00:36:22.170 "base_bdevs_list": [ 00:36:22.170 { 00:36:22.170 "name": "pt1", 00:36:22.170 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:22.170 "is_configured": true, 00:36:22.170 "data_offset": 2048, 00:36:22.170 "data_size": 63488 00:36:22.170 }, 00:36:22.170 { 00:36:22.170 "name": "pt2", 00:36:22.170 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:22.170 "is_configured": true, 00:36:22.170 "data_offset": 2048, 00:36:22.170 "data_size": 63488 00:36:22.170 }, 00:36:22.170 { 00:36:22.170 "name": "pt3", 00:36:22.170 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:22.170 "is_configured": true, 00:36:22.170 "data_offset": 2048, 00:36:22.170 "data_size": 63488 00:36:22.170 } 00:36:22.170 ] 00:36:22.170 }' 00:36:22.170 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:22.170 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.735 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:36:22.735 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:36:22.735 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:22.735 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:36:22.735 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:36:22.735 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:22.735 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:22.735 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:22.735 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.735 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.735 [2024-12-09 05:27:09.489984] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:22.735 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.735 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:22.735 "name": "raid_bdev1", 00:36:22.735 "aliases": [ 00:36:22.735 "c52632ce-17c0-469e-bf30-e7e04727d04c" 00:36:22.735 ], 00:36:22.735 "product_name": "Raid Volume", 00:36:22.735 "block_size": 512, 00:36:22.735 "num_blocks": 190464, 00:36:22.735 "uuid": "c52632ce-17c0-469e-bf30-e7e04727d04c", 00:36:22.735 "assigned_rate_limits": { 00:36:22.735 "rw_ios_per_sec": 0, 00:36:22.735 "rw_mbytes_per_sec": 0, 00:36:22.735 "r_mbytes_per_sec": 0, 00:36:22.735 "w_mbytes_per_sec": 0 00:36:22.735 }, 00:36:22.735 "claimed": false, 00:36:22.735 "zoned": false, 00:36:22.735 "supported_io_types": { 00:36:22.735 "read": true, 00:36:22.735 "write": true, 00:36:22.735 "unmap": true, 00:36:22.735 "flush": true, 00:36:22.735 "reset": true, 00:36:22.735 "nvme_admin": false, 00:36:22.735 "nvme_io": false, 00:36:22.735 "nvme_io_md": false, 00:36:22.735 "write_zeroes": true, 00:36:22.735 "zcopy": false, 00:36:22.735 "get_zone_info": false, 00:36:22.735 "zone_management": false, 00:36:22.735 "zone_append": false, 00:36:22.735 "compare": 
false, 00:36:22.735 "compare_and_write": false, 00:36:22.735 "abort": false, 00:36:22.735 "seek_hole": false, 00:36:22.735 "seek_data": false, 00:36:22.735 "copy": false, 00:36:22.735 "nvme_iov_md": false 00:36:22.735 }, 00:36:22.735 "memory_domains": [ 00:36:22.735 { 00:36:22.735 "dma_device_id": "system", 00:36:22.735 "dma_device_type": 1 00:36:22.735 }, 00:36:22.735 { 00:36:22.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:22.735 "dma_device_type": 2 00:36:22.735 }, 00:36:22.735 { 00:36:22.735 "dma_device_id": "system", 00:36:22.735 "dma_device_type": 1 00:36:22.735 }, 00:36:22.735 { 00:36:22.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:22.735 "dma_device_type": 2 00:36:22.735 }, 00:36:22.735 { 00:36:22.735 "dma_device_id": "system", 00:36:22.735 "dma_device_type": 1 00:36:22.735 }, 00:36:22.735 { 00:36:22.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:22.735 "dma_device_type": 2 00:36:22.735 } 00:36:22.735 ], 00:36:22.735 "driver_specific": { 00:36:22.735 "raid": { 00:36:22.735 "uuid": "c52632ce-17c0-469e-bf30-e7e04727d04c", 00:36:22.735 "strip_size_kb": 64, 00:36:22.735 "state": "online", 00:36:22.735 "raid_level": "raid0", 00:36:22.735 "superblock": true, 00:36:22.735 "num_base_bdevs": 3, 00:36:22.735 "num_base_bdevs_discovered": 3, 00:36:22.735 "num_base_bdevs_operational": 3, 00:36:22.735 "base_bdevs_list": [ 00:36:22.735 { 00:36:22.735 "name": "pt1", 00:36:22.735 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:22.735 "is_configured": true, 00:36:22.735 "data_offset": 2048, 00:36:22.735 "data_size": 63488 00:36:22.735 }, 00:36:22.735 { 00:36:22.735 "name": "pt2", 00:36:22.735 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:22.735 "is_configured": true, 00:36:22.735 "data_offset": 2048, 00:36:22.735 "data_size": 63488 00:36:22.735 }, 00:36:22.735 { 00:36:22.735 "name": "pt3", 00:36:22.735 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:22.735 "is_configured": true, 00:36:22.735 "data_offset": 2048, 00:36:22.735 "data_size": 
63488 00:36:22.735 } 00:36:22.735 ] 00:36:22.735 } 00:36:22.735 } 00:36:22.735 }' 00:36:22.735 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:22.735 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:36:22.735 pt2 00:36:22.735 pt3' 00:36:22.735 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:22.735 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:22.735 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:22.735 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:36:22.735 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.735 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.736 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:22.736 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.736 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:22.736 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:22.736 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:22.736 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:22.736 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:36:22.736 05:27:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.736 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:36:22.994 [2024-12-09 05:27:09.806001] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c52632ce-17c0-469e-bf30-e7e04727d04c 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c52632ce-17c0-469e-bf30-e7e04727d04c ']' 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.994 [2024-12-09 05:27:09.857583] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:22.994 [2024-12-09 05:27:09.857750] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:22.994 [2024-12-09 05:27:09.858044] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:22.994 [2024-12-09 05:27:09.858270] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:22.994 [2024-12-09 05:27:09.858409] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:36:22.994 05:27:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.994 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:23.252 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.252 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:36:23.252 05:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:36:23.252 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:36:23.252 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:36:23.252 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:23.252 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:23.252 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:23.252 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:23.252 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:36:23.252 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.252 05:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:23.252 [2024-12-09 05:27:10.005677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:36:23.252 [2024-12-09 05:27:10.008417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:36:23.252 [2024-12-09 05:27:10.008633] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:36:23.252 [2024-12-09 05:27:10.008721] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:36:23.252 [2024-12-09 05:27:10.008839] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:36:23.252 [2024-12-09 05:27:10.008876] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:36:23.252 [2024-12-09 05:27:10.008904] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:23.252 [2024-12-09 05:27:10.008920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:36:23.252 request: 00:36:23.252 { 00:36:23.252 "name": "raid_bdev1", 00:36:23.252 "raid_level": "raid0", 00:36:23.252 "base_bdevs": [ 00:36:23.252 "malloc1", 00:36:23.252 "malloc2", 00:36:23.252 "malloc3" 00:36:23.252 ], 00:36:23.252 "strip_size_kb": 64, 00:36:23.252 "superblock": false, 00:36:23.252 "method": "bdev_raid_create", 00:36:23.252 "req_id": 1 00:36:23.252 } 00:36:23.252 Got JSON-RPC error response 00:36:23.252 response: 00:36:23.252 { 00:36:23.252 "code": -17, 00:36:23.252 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:36:23.252 } 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:23.252 [2024-12-09 05:27:10.077758] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:23.252 [2024-12-09 05:27:10.077991] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:23.252 [2024-12-09 05:27:10.078170] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:36:23.252 [2024-12-09 05:27:10.078321] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:23.252 [2024-12-09 05:27:10.081542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:23.252 [2024-12-09 05:27:10.081731] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:23.252 [2024-12-09 05:27:10.082029] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:36:23.252 [2024-12-09 05:27:10.082221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:36:23.252 pt1 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:23.252 "name": "raid_bdev1", 00:36:23.252 "uuid": "c52632ce-17c0-469e-bf30-e7e04727d04c", 00:36:23.252 
"strip_size_kb": 64, 00:36:23.252 "state": "configuring", 00:36:23.252 "raid_level": "raid0", 00:36:23.252 "superblock": true, 00:36:23.252 "num_base_bdevs": 3, 00:36:23.252 "num_base_bdevs_discovered": 1, 00:36:23.252 "num_base_bdevs_operational": 3, 00:36:23.252 "base_bdevs_list": [ 00:36:23.252 { 00:36:23.252 "name": "pt1", 00:36:23.252 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:23.252 "is_configured": true, 00:36:23.252 "data_offset": 2048, 00:36:23.252 "data_size": 63488 00:36:23.252 }, 00:36:23.252 { 00:36:23.252 "name": null, 00:36:23.252 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:23.252 "is_configured": false, 00:36:23.252 "data_offset": 2048, 00:36:23.252 "data_size": 63488 00:36:23.252 }, 00:36:23.252 { 00:36:23.252 "name": null, 00:36:23.252 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:23.252 "is_configured": false, 00:36:23.252 "data_offset": 2048, 00:36:23.252 "data_size": 63488 00:36:23.252 } 00:36:23.252 ] 00:36:23.252 }' 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:23.252 05:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:23.820 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:36:23.820 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:23.820 05:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.820 05:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:23.820 [2024-12-09 05:27:10.602307] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:23.820 [2024-12-09 05:27:10.602411] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:23.820 [2024-12-09 05:27:10.602454] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:36:23.820 [2024-12-09 05:27:10.602470] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:23.820 [2024-12-09 05:27:10.603110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:23.820 [2024-12-09 05:27:10.603147] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:23.820 [2024-12-09 05:27:10.603294] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:36:23.820 [2024-12-09 05:27:10.603337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:23.820 pt2 00:36:23.820 05:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.820 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:36:23.820 05:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.820 05:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:23.820 [2024-12-09 05:27:10.610311] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:36:23.821 05:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.821 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:36:23.821 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:23.821 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:23.821 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:23.821 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:23.821 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:23.821 05:27:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:23.821 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:23.821 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:23.821 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:23.821 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:23.821 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:23.821 05:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.821 05:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:23.821 05:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.821 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:23.821 "name": "raid_bdev1", 00:36:23.821 "uuid": "c52632ce-17c0-469e-bf30-e7e04727d04c", 00:36:23.821 "strip_size_kb": 64, 00:36:23.821 "state": "configuring", 00:36:23.821 "raid_level": "raid0", 00:36:23.821 "superblock": true, 00:36:23.821 "num_base_bdevs": 3, 00:36:23.821 "num_base_bdevs_discovered": 1, 00:36:23.821 "num_base_bdevs_operational": 3, 00:36:23.821 "base_bdevs_list": [ 00:36:23.821 { 00:36:23.821 "name": "pt1", 00:36:23.821 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:23.821 "is_configured": true, 00:36:23.821 "data_offset": 2048, 00:36:23.821 "data_size": 63488 00:36:23.821 }, 00:36:23.821 { 00:36:23.821 "name": null, 00:36:23.821 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:23.821 "is_configured": false, 00:36:23.821 "data_offset": 0, 00:36:23.821 "data_size": 63488 00:36:23.821 }, 00:36:23.821 { 00:36:23.821 "name": null, 00:36:23.821 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:23.821 
"is_configured": false, 00:36:23.821 "data_offset": 2048, 00:36:23.821 "data_size": 63488 00:36:23.821 } 00:36:23.821 ] 00:36:23.821 }' 00:36:23.821 05:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:23.821 05:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:24.386 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:36:24.386 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:36:24.386 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:24.386 05:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.386 05:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:24.386 [2024-12-09 05:27:11.138399] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:24.386 [2024-12-09 05:27:11.138643] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:24.386 [2024-12-09 05:27:11.138678] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:36:24.386 [2024-12-09 05:27:11.138695] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:24.386 [2024-12-09 05:27:11.139335] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:24.386 [2024-12-09 05:27:11.139364] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:24.386 [2024-12-09 05:27:11.139448] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:36:24.386 [2024-12-09 05:27:11.139481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:24.386 pt2 00:36:24.386 05:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:36:24.386 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:36:24.386 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:36:24.386 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:36:24.386 05:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.386 05:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:24.386 [2024-12-09 05:27:11.146432] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:36:24.386 [2024-12-09 05:27:11.146502] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:24.386 [2024-12-09 05:27:11.146522] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:36:24.386 [2024-12-09 05:27:11.146536] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:24.386 [2024-12-09 05:27:11.146994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:24.386 [2024-12-09 05:27:11.147028] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:36:24.386 [2024-12-09 05:27:11.147147] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:36:24.386 [2024-12-09 05:27:11.147194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:36:24.386 [2024-12-09 05:27:11.147350] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:36:24.386 [2024-12-09 05:27:11.147378] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:36:24.386 [2024-12-09 05:27:11.147708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:36:24.386 [2024-12-09 05:27:11.148003] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:36:24.386 [2024-12-09 05:27:11.148019] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:36:24.386 [2024-12-09 05:27:11.148262] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:24.386 pt3 00:36:24.386 05:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.386 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:36:24.386 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:36:24.386 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:36:24.386 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:24.386 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:24.386 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:24.386 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:24.386 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:24.386 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:24.386 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:24.386 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:24.386 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:24.386 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:24.386 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:36:24.387 05:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.387 05:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:24.387 05:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.387 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:24.387 "name": "raid_bdev1", 00:36:24.387 "uuid": "c52632ce-17c0-469e-bf30-e7e04727d04c", 00:36:24.387 "strip_size_kb": 64, 00:36:24.387 "state": "online", 00:36:24.387 "raid_level": "raid0", 00:36:24.387 "superblock": true, 00:36:24.387 "num_base_bdevs": 3, 00:36:24.387 "num_base_bdevs_discovered": 3, 00:36:24.387 "num_base_bdevs_operational": 3, 00:36:24.387 "base_bdevs_list": [ 00:36:24.387 { 00:36:24.387 "name": "pt1", 00:36:24.387 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:24.387 "is_configured": true, 00:36:24.387 "data_offset": 2048, 00:36:24.387 "data_size": 63488 00:36:24.387 }, 00:36:24.387 { 00:36:24.387 "name": "pt2", 00:36:24.387 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:24.387 "is_configured": true, 00:36:24.387 "data_offset": 2048, 00:36:24.387 "data_size": 63488 00:36:24.387 }, 00:36:24.387 { 00:36:24.387 "name": "pt3", 00:36:24.387 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:24.387 "is_configured": true, 00:36:24.387 "data_offset": 2048, 00:36:24.387 "data_size": 63488 00:36:24.387 } 00:36:24.387 ] 00:36:24.387 }' 00:36:24.387 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:24.387 05:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:24.953 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:36:24.953 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:36:24.953 05:27:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:24.953 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:24.953 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:36:24.953 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:24.953 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:24.953 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:24.953 05:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.953 05:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:24.953 [2024-12-09 05:27:11.687172] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:24.953 05:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.953 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:24.953 "name": "raid_bdev1", 00:36:24.953 "aliases": [ 00:36:24.953 "c52632ce-17c0-469e-bf30-e7e04727d04c" 00:36:24.953 ], 00:36:24.953 "product_name": "Raid Volume", 00:36:24.953 "block_size": 512, 00:36:24.953 "num_blocks": 190464, 00:36:24.953 "uuid": "c52632ce-17c0-469e-bf30-e7e04727d04c", 00:36:24.953 "assigned_rate_limits": { 00:36:24.953 "rw_ios_per_sec": 0, 00:36:24.953 "rw_mbytes_per_sec": 0, 00:36:24.953 "r_mbytes_per_sec": 0, 00:36:24.953 "w_mbytes_per_sec": 0 00:36:24.953 }, 00:36:24.953 "claimed": false, 00:36:24.953 "zoned": false, 00:36:24.953 "supported_io_types": { 00:36:24.953 "read": true, 00:36:24.953 "write": true, 00:36:24.953 "unmap": true, 00:36:24.953 "flush": true, 00:36:24.953 "reset": true, 00:36:24.953 "nvme_admin": false, 00:36:24.953 "nvme_io": false, 00:36:24.953 "nvme_io_md": false, 00:36:24.953 
"write_zeroes": true, 00:36:24.953 "zcopy": false, 00:36:24.953 "get_zone_info": false, 00:36:24.953 "zone_management": false, 00:36:24.953 "zone_append": false, 00:36:24.953 "compare": false, 00:36:24.953 "compare_and_write": false, 00:36:24.953 "abort": false, 00:36:24.953 "seek_hole": false, 00:36:24.953 "seek_data": false, 00:36:24.953 "copy": false, 00:36:24.953 "nvme_iov_md": false 00:36:24.953 }, 00:36:24.953 "memory_domains": [ 00:36:24.953 { 00:36:24.953 "dma_device_id": "system", 00:36:24.953 "dma_device_type": 1 00:36:24.953 }, 00:36:24.953 { 00:36:24.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:24.953 "dma_device_type": 2 00:36:24.953 }, 00:36:24.953 { 00:36:24.953 "dma_device_id": "system", 00:36:24.953 "dma_device_type": 1 00:36:24.953 }, 00:36:24.953 { 00:36:24.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:24.953 "dma_device_type": 2 00:36:24.953 }, 00:36:24.953 { 00:36:24.953 "dma_device_id": "system", 00:36:24.953 "dma_device_type": 1 00:36:24.953 }, 00:36:24.953 { 00:36:24.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:24.953 "dma_device_type": 2 00:36:24.953 } 00:36:24.953 ], 00:36:24.953 "driver_specific": { 00:36:24.953 "raid": { 00:36:24.953 "uuid": "c52632ce-17c0-469e-bf30-e7e04727d04c", 00:36:24.953 "strip_size_kb": 64, 00:36:24.953 "state": "online", 00:36:24.953 "raid_level": "raid0", 00:36:24.953 "superblock": true, 00:36:24.953 "num_base_bdevs": 3, 00:36:24.953 "num_base_bdevs_discovered": 3, 00:36:24.953 "num_base_bdevs_operational": 3, 00:36:24.953 "base_bdevs_list": [ 00:36:24.953 { 00:36:24.953 "name": "pt1", 00:36:24.953 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:24.953 "is_configured": true, 00:36:24.953 "data_offset": 2048, 00:36:24.953 "data_size": 63488 00:36:24.953 }, 00:36:24.953 { 00:36:24.953 "name": "pt2", 00:36:24.953 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:24.953 "is_configured": true, 00:36:24.953 "data_offset": 2048, 00:36:24.953 "data_size": 63488 00:36:24.953 }, 00:36:24.953 
{ 00:36:24.953 "name": "pt3", 00:36:24.953 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:24.953 "is_configured": true, 00:36:24.953 "data_offset": 2048, 00:36:24.953 "data_size": 63488 00:36:24.953 } 00:36:24.953 ] 00:36:24.953 } 00:36:24.953 } 00:36:24.953 }' 00:36:24.953 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:24.953 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:36:24.953 pt2 00:36:24.953 pt3' 00:36:24.953 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:24.953 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:24.953 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:24.953 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:36:24.953 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:24.953 05:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.953 05:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:24.953 05:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.953 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:24.953 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:24.953 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:24.953 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:36:24.954 05:27:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:24.954 05:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.954 05:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:24.954 05:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.212 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:25.212 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:25.212 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:25.212 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:25.212 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:36:25.212 05:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.212 05:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:25.212 05:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.212 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:25.212 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:25.212 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:25.212 05:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.212 05:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:25.212 05:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:36:25.212 
[2024-12-09 05:27:12.003121] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:25.212 05:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.212 05:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c52632ce-17c0-469e-bf30-e7e04727d04c '!=' c52632ce-17c0-469e-bf30-e7e04727d04c ']' 00:36:25.212 05:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:36:25.212 05:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:36:25.212 05:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:36:25.212 05:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65122 00:36:25.212 05:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65122 ']' 00:36:25.212 05:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65122 00:36:25.212 05:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:36:25.212 05:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:25.212 05:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65122 00:36:25.212 killing process with pid 65122 00:36:25.212 05:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:25.212 05:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:25.212 05:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65122' 00:36:25.212 05:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65122 00:36:25.212 [2024-12-09 05:27:12.084184] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:25.212 05:27:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 65122 00:36:25.212 [2024-12-09 05:27:12.084288] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:25.212 [2024-12-09 05:27:12.084363] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:25.212 [2024-12-09 05:27:12.084381] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:36:25.469 [2024-12-09 05:27:12.337904] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:26.845 05:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:36:26.845 00:36:26.845 real 0m5.721s 00:36:26.845 user 0m8.513s 00:36:26.845 sys 0m0.885s 00:36:26.845 05:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:26.845 ************************************ 00:36:26.845 END TEST raid_superblock_test 00:36:26.845 ************************************ 00:36:26.845 05:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:26.845 05:27:13 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:36:26.845 05:27:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:36:26.845 05:27:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:26.845 05:27:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:26.845 ************************************ 00:36:26.845 START TEST raid_read_error_test 00:36:26.845 ************************************ 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:36:26.845 05:27:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.mCINbjTMVI 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65387 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65387 00:36:26.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65387 ']' 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:26.845 05:27:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:26.845 [2024-12-09 05:27:13.653758] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:36:26.845 [2024-12-09 05:27:13.655092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65387 ] 00:36:27.104 [2024-12-09 05:27:13.850646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:27.104 [2024-12-09 05:27:13.985975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:27.363 [2024-12-09 05:27:14.189608] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:27.363 [2024-12-09 05:27:14.190025] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:27.621 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:27.621 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:36:27.621 05:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:27.621 05:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:36:27.621 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.621 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:27.879 BaseBdev1_malloc 00:36:27.879 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.879 05:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:36:27.879 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.879 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:27.879 true 00:36:27.879 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:36:27.879 05:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:36:27.879 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.879 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:27.879 [2024-12-09 05:27:14.646160] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:36:27.879 [2024-12-09 05:27:14.646262] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:27.879 [2024-12-09 05:27:14.646323] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:36:27.879 [2024-12-09 05:27:14.646339] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:27.879 [2024-12-09 05:27:14.649098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:27.879 [2024-12-09 05:27:14.649176] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:27.879 BaseBdev1 00:36:27.879 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.879 05:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:27.879 05:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:36:27.879 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.879 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:27.879 BaseBdev2_malloc 00:36:27.879 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.879 05:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:36:27.879 05:27:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.879 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:27.879 true 00:36:27.879 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.879 05:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:36:27.879 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.879 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:27.879 [2024-12-09 05:27:14.706524] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:36:27.879 [2024-12-09 05:27:14.706599] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:27.879 [2024-12-09 05:27:14.706622] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:36:27.879 [2024-12-09 05:27:14.706637] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:27.879 [2024-12-09 05:27:14.709509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:27.879 [2024-12-09 05:27:14.709570] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:36:27.879 BaseBdev2 00:36:27.879 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.879 05:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:27.879 05:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:36:27.879 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.879 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:27.879 BaseBdev3_malloc 00:36:27.879 05:27:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.879 05:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:36:27.879 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.879 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:27.879 true 00:36:27.880 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.880 05:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:36:27.880 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.880 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:27.880 [2024-12-09 05:27:14.777425] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:36:27.880 [2024-12-09 05:27:14.777518] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:27.880 [2024-12-09 05:27:14.777544] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:36:27.880 [2024-12-09 05:27:14.777560] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:27.880 [2024-12-09 05:27:14.780535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:27.880 [2024-12-09 05:27:14.780597] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:36:27.880 BaseBdev3 00:36:27.880 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.880 05:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:36:27.880 05:27:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.880 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:27.880 [2024-12-09 05:27:14.785593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:27.880 [2024-12-09 05:27:14.788398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:27.880 [2024-12-09 05:27:14.788500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:27.880 [2024-12-09 05:27:14.788755] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:36:27.880 [2024-12-09 05:27:14.788831] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:36:27.880 [2024-12-09 05:27:14.789166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:36:27.880 [2024-12-09 05:27:14.789401] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:36:27.880 [2024-12-09 05:27:14.789433] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:36:27.880 [2024-12-09 05:27:14.789676] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:27.880 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.880 05:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:36:27.880 05:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:27.880 05:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:27.880 05:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:27.880 05:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:27.880 05:27:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:27.880 05:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:27.880 05:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:27.880 05:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:27.880 05:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:27.880 05:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:27.880 05:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:27.880 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.880 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:27.880 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.138 05:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:28.138 "name": "raid_bdev1", 00:36:28.138 "uuid": "7879d224-fa69-436d-8e1e-e1b39bc3669d", 00:36:28.138 "strip_size_kb": 64, 00:36:28.138 "state": "online", 00:36:28.138 "raid_level": "raid0", 00:36:28.138 "superblock": true, 00:36:28.138 "num_base_bdevs": 3, 00:36:28.138 "num_base_bdevs_discovered": 3, 00:36:28.138 "num_base_bdevs_operational": 3, 00:36:28.138 "base_bdevs_list": [ 00:36:28.138 { 00:36:28.138 "name": "BaseBdev1", 00:36:28.138 "uuid": "b046aaff-89f9-54b4-9b42-6821b8c76153", 00:36:28.138 "is_configured": true, 00:36:28.138 "data_offset": 2048, 00:36:28.138 "data_size": 63488 00:36:28.138 }, 00:36:28.138 { 00:36:28.138 "name": "BaseBdev2", 00:36:28.138 "uuid": "1a47cc44-60a9-59ff-8554-6a5fec51039e", 00:36:28.138 "is_configured": true, 00:36:28.138 "data_offset": 2048, 00:36:28.138 "data_size": 63488 
00:36:28.138 }, 00:36:28.138 { 00:36:28.138 "name": "BaseBdev3", 00:36:28.138 "uuid": "02bf66c8-7e23-5f9d-bcde-4dce798e3432", 00:36:28.138 "is_configured": true, 00:36:28.138 "data_offset": 2048, 00:36:28.138 "data_size": 63488 00:36:28.138 } 00:36:28.138 ] 00:36:28.138 }' 00:36:28.138 05:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:28.138 05:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:28.395 05:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:36:28.395 05:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:36:28.654 [2024-12-09 05:27:15.407402] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:36:29.589 05:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:36:29.589 05:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.589 05:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:29.589 05:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.589 05:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:36:29.589 05:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:36:29.589 05:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:36:29.589 05:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:36:29.589 05:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:29.589 05:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:36:29.589 05:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:29.589 05:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:29.589 05:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:29.589 05:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:29.589 05:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:29.589 05:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:29.589 05:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:29.589 05:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:29.589 05:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:29.589 05:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.589 05:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:29.589 05:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.589 05:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:29.589 "name": "raid_bdev1", 00:36:29.589 "uuid": "7879d224-fa69-436d-8e1e-e1b39bc3669d", 00:36:29.589 "strip_size_kb": 64, 00:36:29.589 "state": "online", 00:36:29.589 "raid_level": "raid0", 00:36:29.589 "superblock": true, 00:36:29.589 "num_base_bdevs": 3, 00:36:29.589 "num_base_bdevs_discovered": 3, 00:36:29.589 "num_base_bdevs_operational": 3, 00:36:29.589 "base_bdevs_list": [ 00:36:29.589 { 00:36:29.589 "name": "BaseBdev1", 00:36:29.589 "uuid": "b046aaff-89f9-54b4-9b42-6821b8c76153", 00:36:29.589 "is_configured": true, 00:36:29.589 "data_offset": 2048, 00:36:29.589 "data_size": 63488 
00:36:29.589 }, 00:36:29.589 { 00:36:29.589 "name": "BaseBdev2", 00:36:29.589 "uuid": "1a47cc44-60a9-59ff-8554-6a5fec51039e", 00:36:29.589 "is_configured": true, 00:36:29.589 "data_offset": 2048, 00:36:29.589 "data_size": 63488 00:36:29.589 }, 00:36:29.589 { 00:36:29.589 "name": "BaseBdev3", 00:36:29.589 "uuid": "02bf66c8-7e23-5f9d-bcde-4dce798e3432", 00:36:29.589 "is_configured": true, 00:36:29.589 "data_offset": 2048, 00:36:29.589 "data_size": 63488 00:36:29.589 } 00:36:29.589 ] 00:36:29.589 }' 00:36:29.589 05:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:29.589 05:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:29.847 05:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:36:29.847 05:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.847 05:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:30.105 [2024-12-09 05:27:16.823087] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:30.105 [2024-12-09 05:27:16.823127] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:30.105 [2024-12-09 05:27:16.827023] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:30.105 [2024-12-09 05:27:16.827305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:30.105 [2024-12-09 05:27:16.827517] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:30.105 [2024-12-09 05:27:16.827695] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:36:30.105 { 00:36:30.105 "results": [ 00:36:30.105 { 00:36:30.105 "job": "raid_bdev1", 00:36:30.105 "core_mask": "0x1", 00:36:30.105 "workload": "randrw", 00:36:30.105 "percentage": 50, 
00:36:30.105 "status": "finished", 00:36:30.105 "queue_depth": 1, 00:36:30.105 "io_size": 131072, 00:36:30.105 "runtime": 1.41304, 00:36:30.105 "iops": 10686.180150597294, 00:36:30.105 "mibps": 1335.7725188246618, 00:36:30.105 "io_failed": 1, 00:36:30.105 "io_timeout": 0, 00:36:30.105 "avg_latency_us": 131.011145077689, 00:36:30.105 "min_latency_us": 36.305454545454545, 00:36:30.105 "max_latency_us": 1675.6363636363637 00:36:30.105 } 00:36:30.105 ], 00:36:30.105 "core_count": 1 00:36:30.105 } 00:36:30.105 05:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.105 05:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65387 00:36:30.105 05:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65387 ']' 00:36:30.105 05:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65387 00:36:30.105 05:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:36:30.105 05:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:30.105 05:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65387 00:36:30.105 05:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:30.105 05:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:30.105 killing process with pid 65387 00:36:30.105 05:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65387' 00:36:30.105 05:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65387 00:36:30.105 [2024-12-09 05:27:16.871249] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:30.105 05:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65387 00:36:30.105 [2024-12-09 
05:27:17.070142] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:31.503 05:27:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.mCINbjTMVI 00:36:31.503 05:27:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:36:31.503 05:27:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:36:31.503 05:27:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:36:31.503 05:27:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:36:31.503 05:27:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:36:31.503 05:27:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:36:31.503 05:27:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:36:31.503 00:36:31.503 real 0m4.718s 00:36:31.503 user 0m5.711s 00:36:31.503 sys 0m0.667s 00:36:31.503 ************************************ 00:36:31.503 END TEST raid_read_error_test 00:36:31.503 ************************************ 00:36:31.503 05:27:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:31.503 05:27:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:31.503 05:27:18 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:36:31.503 05:27:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:36:31.503 05:27:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:31.503 05:27:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:31.503 ************************************ 00:36:31.503 START TEST raid_write_error_test 00:36:31.503 ************************************ 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:36:31.503 05:27:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:36:31.503 05:27:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.H3OFUMNbp0 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65527 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65527 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65527 ']' 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:31.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:31.503 05:27:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:31.503 [2024-12-09 05:27:18.435775] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:36:31.503 [2024-12-09 05:27:18.435967] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65527 ] 00:36:31.761 [2024-12-09 05:27:18.620795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:32.019 [2024-12-09 05:27:18.764258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:32.019 [2024-12-09 05:27:18.978752] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:32.019 [2024-12-09 05:27:18.978823] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:32.584 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:32.584 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:36:32.584 05:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:32.584 05:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:36:32.584 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.584 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:32.584 BaseBdev1_malloc 00:36:32.584 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.584 05:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:36:32.584 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.584 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:32.584 true 00:36:32.584 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.584 05:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:36:32.584 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.584 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:32.584 [2024-12-09 05:27:19.414285] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:36:32.584 [2024-12-09 05:27:19.414366] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:32.584 [2024-12-09 05:27:19.414395] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:36:32.584 [2024-12-09 05:27:19.414412] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:32.584 [2024-12-09 05:27:19.417308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:32.584 [2024-12-09 05:27:19.417370] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:32.584 BaseBdev1 00:36:32.585 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.585 05:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:32.585 05:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:36:32.585 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.585 05:27:19 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:36:32.585 BaseBdev2_malloc 00:36:32.585 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.585 05:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:36:32.585 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.585 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:32.585 true 00:36:32.585 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.585 05:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:36:32.585 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.585 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:32.585 [2024-12-09 05:27:19.478593] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:36:32.585 [2024-12-09 05:27:19.478666] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:32.585 [2024-12-09 05:27:19.478690] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:36:32.585 [2024-12-09 05:27:19.478705] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:32.585 [2024-12-09 05:27:19.481474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:32.585 [2024-12-09 05:27:19.481535] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:36:32.585 BaseBdev2 00:36:32.585 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.585 05:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:32.585 05:27:19 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:36:32.585 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.585 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:32.585 BaseBdev3_malloc 00:36:32.585 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.585 05:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:36:32.585 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.585 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:32.585 true 00:36:32.585 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.585 05:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:36:32.585 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.585 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:32.585 [2024-12-09 05:27:19.544806] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:36:32.585 [2024-12-09 05:27:19.544888] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:32.585 [2024-12-09 05:27:19.544914] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:36:32.585 [2024-12-09 05:27:19.544930] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:32.585 [2024-12-09 05:27:19.547687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:32.585 [2024-12-09 05:27:19.547748] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:36:32.585 BaseBdev3 00:36:32.585 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.585 05:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:36:32.585 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.585 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:32.585 [2024-12-09 05:27:19.552931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:32.843 [2024-12-09 05:27:19.555599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:32.843 [2024-12-09 05:27:19.555728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:32.843 [2024-12-09 05:27:19.556055] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:36:32.843 [2024-12-09 05:27:19.556092] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:36:32.843 [2024-12-09 05:27:19.556508] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:36:32.843 [2024-12-09 05:27:19.556932] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:36:32.843 [2024-12-09 05:27:19.556977] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:36:32.843 [2024-12-09 05:27:19.557235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:32.843 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.843 05:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:36:32.843 05:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:36:32.843 05:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:32.843 05:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:32.843 05:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:32.843 05:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:32.843 05:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:32.843 05:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:32.843 05:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:32.843 05:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:32.843 05:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:32.843 05:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:32.843 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.843 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:32.843 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.843 05:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:32.843 "name": "raid_bdev1", 00:36:32.843 "uuid": "a8172d7c-5616-463d-96b5-447da9db492e", 00:36:32.843 "strip_size_kb": 64, 00:36:32.843 "state": "online", 00:36:32.843 "raid_level": "raid0", 00:36:32.843 "superblock": true, 00:36:32.843 "num_base_bdevs": 3, 00:36:32.843 "num_base_bdevs_discovered": 3, 00:36:32.843 "num_base_bdevs_operational": 3, 00:36:32.843 "base_bdevs_list": [ 00:36:32.843 { 00:36:32.843 "name": "BaseBdev1", 
00:36:32.843 "uuid": "e6d84f95-c177-5846-ad23-f103251bc509", 00:36:32.843 "is_configured": true, 00:36:32.843 "data_offset": 2048, 00:36:32.843 "data_size": 63488 00:36:32.843 }, 00:36:32.843 { 00:36:32.843 "name": "BaseBdev2", 00:36:32.843 "uuid": "1750f567-2db2-5698-97b6-c990de258a71", 00:36:32.843 "is_configured": true, 00:36:32.843 "data_offset": 2048, 00:36:32.843 "data_size": 63488 00:36:32.843 }, 00:36:32.843 { 00:36:32.843 "name": "BaseBdev3", 00:36:32.843 "uuid": "8763d223-a7a0-5c81-b68a-bf6f5f81bb6c", 00:36:32.843 "is_configured": true, 00:36:32.843 "data_offset": 2048, 00:36:32.843 "data_size": 63488 00:36:32.843 } 00:36:32.843 ] 00:36:32.843 }' 00:36:32.843 05:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:32.843 05:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:33.101 05:27:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:36:33.101 05:27:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:36:33.359 [2024-12-09 05:27:20.186908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:36:34.296 05:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:36:34.296 05:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.296 05:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:34.296 05:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.296 05:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:36:34.296 05:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:36:34.296 05:27:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:36:34.296 05:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:36:34.296 05:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:34.296 05:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:34.296 05:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:34.296 05:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:34.296 05:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:34.296 05:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:34.296 05:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:34.296 05:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:34.296 05:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:34.296 05:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:34.296 05:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:34.296 05:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.296 05:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:34.296 05:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.296 05:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:34.296 "name": "raid_bdev1", 00:36:34.296 "uuid": "a8172d7c-5616-463d-96b5-447da9db492e", 00:36:34.296 "strip_size_kb": 64, 00:36:34.296 "state": "online", 00:36:34.296 
"raid_level": "raid0", 00:36:34.296 "superblock": true, 00:36:34.296 "num_base_bdevs": 3, 00:36:34.296 "num_base_bdevs_discovered": 3, 00:36:34.296 "num_base_bdevs_operational": 3, 00:36:34.296 "base_bdevs_list": [ 00:36:34.296 { 00:36:34.296 "name": "BaseBdev1", 00:36:34.296 "uuid": "e6d84f95-c177-5846-ad23-f103251bc509", 00:36:34.296 "is_configured": true, 00:36:34.296 "data_offset": 2048, 00:36:34.296 "data_size": 63488 00:36:34.296 }, 00:36:34.296 { 00:36:34.296 "name": "BaseBdev2", 00:36:34.296 "uuid": "1750f567-2db2-5698-97b6-c990de258a71", 00:36:34.296 "is_configured": true, 00:36:34.296 "data_offset": 2048, 00:36:34.296 "data_size": 63488 00:36:34.296 }, 00:36:34.296 { 00:36:34.296 "name": "BaseBdev3", 00:36:34.296 "uuid": "8763d223-a7a0-5c81-b68a-bf6f5f81bb6c", 00:36:34.296 "is_configured": true, 00:36:34.296 "data_offset": 2048, 00:36:34.296 "data_size": 63488 00:36:34.296 } 00:36:34.296 ] 00:36:34.296 }' 00:36:34.296 05:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:34.296 05:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:34.888 05:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:36:34.888 05:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.888 05:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:34.888 [2024-12-09 05:27:21.578965] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:34.888 [2024-12-09 05:27:21.578999] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:34.888 [2024-12-09 05:27:21.582264] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:34.888 [2024-12-09 05:27:21.582567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:34.888 [2024-12-09 05:27:21.582640] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:34.888 [2024-12-09 05:27:21.582656] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:36:34.888 { 00:36:34.888 "results": [ 00:36:34.888 { 00:36:34.888 "job": "raid_bdev1", 00:36:34.888 "core_mask": "0x1", 00:36:34.888 "workload": "randrw", 00:36:34.888 "percentage": 50, 00:36:34.888 "status": "finished", 00:36:34.888 "queue_depth": 1, 00:36:34.888 "io_size": 131072, 00:36:34.888 "runtime": 1.389587, 00:36:34.888 "iops": 10481.531562975186, 00:36:34.888 "mibps": 1310.1914453718982, 00:36:34.888 "io_failed": 1, 00:36:34.888 "io_timeout": 0, 00:36:34.888 "avg_latency_us": 133.2981403767179, 00:36:34.888 "min_latency_us": 36.77090909090909, 00:36:34.888 "max_latency_us": 1697.9781818181818 00:36:34.888 } 00:36:34.888 ], 00:36:34.888 "core_count": 1 00:36:34.888 } 00:36:34.888 05:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.888 05:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65527 00:36:34.888 05:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65527 ']' 00:36:34.888 05:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65527 00:36:34.888 05:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:36:34.888 05:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:34.888 05:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65527 00:36:34.888 killing process with pid 65527 00:36:34.888 05:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:34.888 05:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:34.888 05:27:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65527' 00:36:34.888 05:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65527 00:36:34.888 [2024-12-09 05:27:21.620731] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:34.888 05:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65527 00:36:34.888 [2024-12-09 05:27:21.795949] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:36.266 05:27:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.H3OFUMNbp0 00:36:36.266 05:27:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:36:36.266 05:27:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:36:36.266 05:27:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:36:36.266 05:27:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:36:36.266 05:27:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:36:36.266 05:27:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:36:36.266 05:27:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:36:36.266 00:36:36.266 real 0m4.640s 00:36:36.266 user 0m5.627s 00:36:36.266 sys 0m0.644s 00:36:36.266 05:27:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:36.266 ************************************ 00:36:36.266 END TEST raid_write_error_test 00:36:36.266 ************************************ 00:36:36.266 05:27:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:36.266 05:27:22 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:36:36.266 05:27:22 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:36:36.266 05:27:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:36:36.266 05:27:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:36.266 05:27:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:36.266 ************************************ 00:36:36.266 START TEST raid_state_function_test 00:36:36.266 ************************************ 00:36:36.266 05:27:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:36:36.266 05:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:36:36.266 05:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:36:36.266 05:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:36:36.266 05:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:36:36.266 05:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:36:36.267 05:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:36.267 05:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:36:36.267 05:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:36.267 05:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:36.267 05:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:36:36.267 05:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:36.267 05:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:36.267 05:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:36:36.267 05:27:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:36.267 05:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:36.267 05:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:36:36.267 05:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:36:36.267 05:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:36:36.267 05:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:36:36.267 05:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:36:36.267 05:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:36:36.267 05:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:36:36.267 05:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:36:36.267 05:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:36:36.267 05:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:36:36.267 05:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:36:36.267 Process raid pid: 65675 00:36:36.267 05:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65675 00:36:36.267 05:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65675' 00:36:36.267 05:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65675 00:36:36.267 05:27:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65675 ']' 00:36:36.267 05:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:36:36.267 05:27:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:36.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:36.267 05:27:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:36.267 05:27:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:36.267 05:27:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:36.267 05:27:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:36.267 [2024-12-09 05:27:23.124971] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:36:36.267 [2024-12-09 05:27:23.125164] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:36.525 [2024-12-09 05:27:23.310884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:36.525 [2024-12-09 05:27:23.428655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:36.797 [2024-12-09 05:27:23.646395] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:36.797 [2024-12-09 05:27:23.646709] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:37.366 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:37.366 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:36:37.366 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create 
-z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:36:37.366 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.366 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.366 [2024-12-09 05:27:24.042610] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:37.366 [2024-12-09 05:27:24.042863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:37.366 [2024-12-09 05:27:24.042895] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:37.366 [2024-12-09 05:27:24.042915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:37.366 [2024-12-09 05:27:24.042926] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:37.366 [2024-12-09 05:27:24.042941] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:37.366 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.366 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:36:37.366 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:37.366 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:37.366 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:37.366 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:37.366 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:37.366 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:36:37.366 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:37.366 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:37.366 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:37.366 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:37.366 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:37.366 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.366 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.366 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.366 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:37.366 "name": "Existed_Raid", 00:36:37.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:37.366 "strip_size_kb": 64, 00:36:37.366 "state": "configuring", 00:36:37.366 "raid_level": "concat", 00:36:37.366 "superblock": false, 00:36:37.366 "num_base_bdevs": 3, 00:36:37.366 "num_base_bdevs_discovered": 0, 00:36:37.366 "num_base_bdevs_operational": 3, 00:36:37.366 "base_bdevs_list": [ 00:36:37.366 { 00:36:37.366 "name": "BaseBdev1", 00:36:37.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:37.366 "is_configured": false, 00:36:37.366 "data_offset": 0, 00:36:37.366 "data_size": 0 00:36:37.366 }, 00:36:37.366 { 00:36:37.366 "name": "BaseBdev2", 00:36:37.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:37.366 "is_configured": false, 00:36:37.366 "data_offset": 0, 00:36:37.366 "data_size": 0 00:36:37.366 }, 00:36:37.366 { 00:36:37.366 "name": "BaseBdev3", 00:36:37.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:37.366 
"is_configured": false, 00:36:37.366 "data_offset": 0, 00:36:37.366 "data_size": 0 00:36:37.366 } 00:36:37.366 ] 00:36:37.366 }' 00:36:37.366 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:37.366 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.626 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:37.626 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.626 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.626 [2024-12-09 05:27:24.550694] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:37.626 [2024-12-09 05:27:24.550918] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:36:37.626 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.626 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:36:37.626 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.626 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.626 [2024-12-09 05:27:24.562712] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:37.626 [2024-12-09 05:27:24.563000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:37.626 [2024-12-09 05:27:24.563122] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:37.626 [2024-12-09 05:27:24.563206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:37.626 [2024-12-09 
05:27:24.563323] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:37.626 [2024-12-09 05:27:24.563380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:37.626 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.626 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:36:37.626 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.626 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.886 [2024-12-09 05:27:24.609516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:37.886 BaseBdev1 00:36:37.886 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.886 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:36:37.886 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:36:37.886 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:37.886 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:37.886 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:37.886 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:37.886 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:37.886 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.886 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.886 05:27:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.886 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:37.886 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.886 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.886 [ 00:36:37.886 { 00:36:37.886 "name": "BaseBdev1", 00:36:37.886 "aliases": [ 00:36:37.886 "6860bdf2-8bda-4cec-8b03-e98a88e580e2" 00:36:37.886 ], 00:36:37.886 "product_name": "Malloc disk", 00:36:37.886 "block_size": 512, 00:36:37.886 "num_blocks": 65536, 00:36:37.886 "uuid": "6860bdf2-8bda-4cec-8b03-e98a88e580e2", 00:36:37.886 "assigned_rate_limits": { 00:36:37.886 "rw_ios_per_sec": 0, 00:36:37.886 "rw_mbytes_per_sec": 0, 00:36:37.886 "r_mbytes_per_sec": 0, 00:36:37.886 "w_mbytes_per_sec": 0 00:36:37.886 }, 00:36:37.886 "claimed": true, 00:36:37.886 "claim_type": "exclusive_write", 00:36:37.886 "zoned": false, 00:36:37.886 "supported_io_types": { 00:36:37.886 "read": true, 00:36:37.886 "write": true, 00:36:37.886 "unmap": true, 00:36:37.886 "flush": true, 00:36:37.886 "reset": true, 00:36:37.886 "nvme_admin": false, 00:36:37.886 "nvme_io": false, 00:36:37.886 "nvme_io_md": false, 00:36:37.886 "write_zeroes": true, 00:36:37.886 "zcopy": true, 00:36:37.886 "get_zone_info": false, 00:36:37.886 "zone_management": false, 00:36:37.886 "zone_append": false, 00:36:37.886 "compare": false, 00:36:37.886 "compare_and_write": false, 00:36:37.886 "abort": true, 00:36:37.886 "seek_hole": false, 00:36:37.886 "seek_data": false, 00:36:37.886 "copy": true, 00:36:37.886 "nvme_iov_md": false 00:36:37.886 }, 00:36:37.886 "memory_domains": [ 00:36:37.886 { 00:36:37.886 "dma_device_id": "system", 00:36:37.886 "dma_device_type": 1 00:36:37.886 }, 00:36:37.886 { 00:36:37.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:37.886 "dma_device_type": 
2 00:36:37.886 } 00:36:37.886 ], 00:36:37.886 "driver_specific": {} 00:36:37.886 } 00:36:37.886 ] 00:36:37.886 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.886 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:37.886 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:36:37.886 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:37.886 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:37.886 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:37.886 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:37.886 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:37.886 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:37.886 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:37.886 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:37.886 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:37.886 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:37.886 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:37.886 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.886 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.886 05:27:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.886 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:37.886 "name": "Existed_Raid", 00:36:37.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:37.886 "strip_size_kb": 64, 00:36:37.886 "state": "configuring", 00:36:37.887 "raid_level": "concat", 00:36:37.887 "superblock": false, 00:36:37.887 "num_base_bdevs": 3, 00:36:37.887 "num_base_bdevs_discovered": 1, 00:36:37.887 "num_base_bdevs_operational": 3, 00:36:37.887 "base_bdevs_list": [ 00:36:37.887 { 00:36:37.887 "name": "BaseBdev1", 00:36:37.887 "uuid": "6860bdf2-8bda-4cec-8b03-e98a88e580e2", 00:36:37.887 "is_configured": true, 00:36:37.887 "data_offset": 0, 00:36:37.887 "data_size": 65536 00:36:37.887 }, 00:36:37.887 { 00:36:37.887 "name": "BaseBdev2", 00:36:37.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:37.887 "is_configured": false, 00:36:37.887 "data_offset": 0, 00:36:37.887 "data_size": 0 00:36:37.887 }, 00:36:37.887 { 00:36:37.887 "name": "BaseBdev3", 00:36:37.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:37.887 "is_configured": false, 00:36:37.887 "data_offset": 0, 00:36:37.887 "data_size": 0 00:36:37.887 } 00:36:37.887 ] 00:36:37.887 }' 00:36:37.887 05:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:37.887 05:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.454 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:38.454 05:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.454 05:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.454 [2024-12-09 05:27:25.157722] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:38.454 [2024-12-09 05:27:25.157971] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:36:38.454 05:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.454 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:36:38.454 05:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.454 05:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.454 [2024-12-09 05:27:25.169763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:38.454 [2024-12-09 05:27:25.172674] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:38.454 [2024-12-09 05:27:25.172945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:38.454 [2024-12-09 05:27:25.172975] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:38.454 [2024-12-09 05:27:25.172993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:38.454 05:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.454 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:36:38.454 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:38.454 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:36:38.454 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:38.454 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:38.454 05:27:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:38.454 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:38.454 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:38.454 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:38.454 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:38.454 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:38.454 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:38.454 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:38.454 05:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.454 05:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.454 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:38.454 05:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.454 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:38.454 "name": "Existed_Raid", 00:36:38.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:38.454 "strip_size_kb": 64, 00:36:38.454 "state": "configuring", 00:36:38.454 "raid_level": "concat", 00:36:38.454 "superblock": false, 00:36:38.454 "num_base_bdevs": 3, 00:36:38.454 "num_base_bdevs_discovered": 1, 00:36:38.454 "num_base_bdevs_operational": 3, 00:36:38.454 "base_bdevs_list": [ 00:36:38.454 { 00:36:38.454 "name": "BaseBdev1", 00:36:38.454 "uuid": "6860bdf2-8bda-4cec-8b03-e98a88e580e2", 00:36:38.454 "is_configured": true, 00:36:38.454 "data_offset": 0, 00:36:38.454 "data_size": 65536 
00:36:38.454 }, 00:36:38.454 { 00:36:38.454 "name": "BaseBdev2", 00:36:38.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:38.454 "is_configured": false, 00:36:38.454 "data_offset": 0, 00:36:38.454 "data_size": 0 00:36:38.454 }, 00:36:38.454 { 00:36:38.454 "name": "BaseBdev3", 00:36:38.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:38.454 "is_configured": false, 00:36:38.454 "data_offset": 0, 00:36:38.454 "data_size": 0 00:36:38.454 } 00:36:38.454 ] 00:36:38.454 }' 00:36:38.454 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:38.454 05:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:39.018 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:36:39.018 05:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.018 05:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:39.018 [2024-12-09 05:27:25.734362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:39.018 BaseBdev2 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:39.019 05:27:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:39.019 [ 00:36:39.019 { 00:36:39.019 "name": "BaseBdev2", 00:36:39.019 "aliases": [ 00:36:39.019 "f697dfbf-9360-4812-a0b3-8227fcba5133" 00:36:39.019 ], 00:36:39.019 "product_name": "Malloc disk", 00:36:39.019 "block_size": 512, 00:36:39.019 "num_blocks": 65536, 00:36:39.019 "uuid": "f697dfbf-9360-4812-a0b3-8227fcba5133", 00:36:39.019 "assigned_rate_limits": { 00:36:39.019 "rw_ios_per_sec": 0, 00:36:39.019 "rw_mbytes_per_sec": 0, 00:36:39.019 "r_mbytes_per_sec": 0, 00:36:39.019 "w_mbytes_per_sec": 0 00:36:39.019 }, 00:36:39.019 "claimed": true, 00:36:39.019 "claim_type": "exclusive_write", 00:36:39.019 "zoned": false, 00:36:39.019 "supported_io_types": { 00:36:39.019 "read": true, 00:36:39.019 "write": true, 00:36:39.019 "unmap": true, 00:36:39.019 "flush": true, 00:36:39.019 "reset": true, 00:36:39.019 "nvme_admin": false, 00:36:39.019 "nvme_io": false, 00:36:39.019 "nvme_io_md": false, 00:36:39.019 "write_zeroes": true, 00:36:39.019 "zcopy": true, 00:36:39.019 "get_zone_info": false, 00:36:39.019 "zone_management": false, 00:36:39.019 "zone_append": false, 00:36:39.019 "compare": false, 00:36:39.019 "compare_and_write": false, 00:36:39.019 "abort": true, 00:36:39.019 "seek_hole": false, 00:36:39.019 
"seek_data": false, 00:36:39.019 "copy": true, 00:36:39.019 "nvme_iov_md": false 00:36:39.019 }, 00:36:39.019 "memory_domains": [ 00:36:39.019 { 00:36:39.019 "dma_device_id": "system", 00:36:39.019 "dma_device_type": 1 00:36:39.019 }, 00:36:39.019 { 00:36:39.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:39.019 "dma_device_type": 2 00:36:39.019 } 00:36:39.019 ], 00:36:39.019 "driver_specific": {} 00:36:39.019 } 00:36:39.019 ] 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:39.019 "name": "Existed_Raid", 00:36:39.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:39.019 "strip_size_kb": 64, 00:36:39.019 "state": "configuring", 00:36:39.019 "raid_level": "concat", 00:36:39.019 "superblock": false, 00:36:39.019 "num_base_bdevs": 3, 00:36:39.019 "num_base_bdevs_discovered": 2, 00:36:39.019 "num_base_bdevs_operational": 3, 00:36:39.019 "base_bdevs_list": [ 00:36:39.019 { 00:36:39.019 "name": "BaseBdev1", 00:36:39.019 "uuid": "6860bdf2-8bda-4cec-8b03-e98a88e580e2", 00:36:39.019 "is_configured": true, 00:36:39.019 "data_offset": 0, 00:36:39.019 "data_size": 65536 00:36:39.019 }, 00:36:39.019 { 00:36:39.019 "name": "BaseBdev2", 00:36:39.019 "uuid": "f697dfbf-9360-4812-a0b3-8227fcba5133", 00:36:39.019 "is_configured": true, 00:36:39.019 "data_offset": 0, 00:36:39.019 "data_size": 65536 00:36:39.019 }, 00:36:39.019 { 00:36:39.019 "name": "BaseBdev3", 00:36:39.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:39.019 "is_configured": false, 00:36:39.019 "data_offset": 0, 00:36:39.019 "data_size": 0 00:36:39.019 } 00:36:39.019 ] 00:36:39.019 }' 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:39.019 05:27:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:36:39.585 05:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:36:39.585 05:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.585 05:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:39.585 [2024-12-09 05:27:26.352830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:39.585 [2024-12-09 05:27:26.352915] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:36:39.585 [2024-12-09 05:27:26.352936] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:36:39.585 [2024-12-09 05:27:26.353310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:36:39.585 [2024-12-09 05:27:26.353611] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:36:39.585 [2024-12-09 05:27:26.353628] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:36:39.585 [2024-12-09 05:27:26.354081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:39.585 BaseBdev3 00:36:39.585 05:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.585 05:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:36:39.585 05:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:36:39.585 05:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:39.585 05:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:39.585 05:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:39.585 05:27:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:39.585 05:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:39.585 05:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.585 05:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:39.585 05:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.585 05:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:36:39.585 05:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.585 05:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:39.585 [ 00:36:39.585 { 00:36:39.585 "name": "BaseBdev3", 00:36:39.585 "aliases": [ 00:36:39.585 "a6a1a79f-2fb2-46e6-9533-d0746bbb97cc" 00:36:39.585 ], 00:36:39.585 "product_name": "Malloc disk", 00:36:39.585 "block_size": 512, 00:36:39.585 "num_blocks": 65536, 00:36:39.585 "uuid": "a6a1a79f-2fb2-46e6-9533-d0746bbb97cc", 00:36:39.585 "assigned_rate_limits": { 00:36:39.585 "rw_ios_per_sec": 0, 00:36:39.585 "rw_mbytes_per_sec": 0, 00:36:39.585 "r_mbytes_per_sec": 0, 00:36:39.585 "w_mbytes_per_sec": 0 00:36:39.585 }, 00:36:39.585 "claimed": true, 00:36:39.585 "claim_type": "exclusive_write", 00:36:39.585 "zoned": false, 00:36:39.585 "supported_io_types": { 00:36:39.585 "read": true, 00:36:39.585 "write": true, 00:36:39.585 "unmap": true, 00:36:39.585 "flush": true, 00:36:39.585 "reset": true, 00:36:39.585 "nvme_admin": false, 00:36:39.585 "nvme_io": false, 00:36:39.585 "nvme_io_md": false, 00:36:39.585 "write_zeroes": true, 00:36:39.585 "zcopy": true, 00:36:39.585 "get_zone_info": false, 00:36:39.585 "zone_management": false, 00:36:39.585 "zone_append": false, 00:36:39.585 "compare": false, 
00:36:39.585 "compare_and_write": false, 00:36:39.585 "abort": true, 00:36:39.585 "seek_hole": false, 00:36:39.585 "seek_data": false, 00:36:39.585 "copy": true, 00:36:39.585 "nvme_iov_md": false 00:36:39.585 }, 00:36:39.585 "memory_domains": [ 00:36:39.585 { 00:36:39.585 "dma_device_id": "system", 00:36:39.585 "dma_device_type": 1 00:36:39.585 }, 00:36:39.585 { 00:36:39.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:39.585 "dma_device_type": 2 00:36:39.585 } 00:36:39.585 ], 00:36:39.586 "driver_specific": {} 00:36:39.586 } 00:36:39.586 ] 00:36:39.586 05:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.586 05:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:39.586 05:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:39.586 05:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:39.586 05:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:36:39.586 05:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:39.586 05:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:39.586 05:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:39.586 05:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:39.586 05:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:39.586 05:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:39.586 05:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:39.586 05:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:36:39.586 05:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:39.586 05:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:39.586 05:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:39.586 05:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.586 05:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:39.586 05:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.586 05:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:39.586 "name": "Existed_Raid", 00:36:39.586 "uuid": "2dbeeb2c-6380-4c85-8159-33b66b28576f", 00:36:39.586 "strip_size_kb": 64, 00:36:39.586 "state": "online", 00:36:39.586 "raid_level": "concat", 00:36:39.586 "superblock": false, 00:36:39.586 "num_base_bdevs": 3, 00:36:39.586 "num_base_bdevs_discovered": 3, 00:36:39.586 "num_base_bdevs_operational": 3, 00:36:39.586 "base_bdevs_list": [ 00:36:39.586 { 00:36:39.586 "name": "BaseBdev1", 00:36:39.586 "uuid": "6860bdf2-8bda-4cec-8b03-e98a88e580e2", 00:36:39.586 "is_configured": true, 00:36:39.586 "data_offset": 0, 00:36:39.586 "data_size": 65536 00:36:39.586 }, 00:36:39.586 { 00:36:39.586 "name": "BaseBdev2", 00:36:39.586 "uuid": "f697dfbf-9360-4812-a0b3-8227fcba5133", 00:36:39.586 "is_configured": true, 00:36:39.586 "data_offset": 0, 00:36:39.586 "data_size": 65536 00:36:39.586 }, 00:36:39.586 { 00:36:39.586 "name": "BaseBdev3", 00:36:39.586 "uuid": "a6a1a79f-2fb2-46e6-9533-d0746bbb97cc", 00:36:39.586 "is_configured": true, 00:36:39.586 "data_offset": 0, 00:36:39.586 "data_size": 65536 00:36:39.586 } 00:36:39.586 ] 00:36:39.586 }' 00:36:39.586 05:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:36:39.586 05:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:36:40.154 05:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:36:40.154 05:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:36:40.154 05:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:36:40.154 05:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:36:40.154 05:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:36:40.154 05:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:36:40.154 05:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:36:40.154 05:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.154 05:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:36:40.154 05:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:36:40.154 [2024-12-09 05:27:26.925456] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:36:40.154 05:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.154 05:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:36:40.154 "name": "Existed_Raid",
00:36:40.154 "aliases": [
00:36:40.154 "2dbeeb2c-6380-4c85-8159-33b66b28576f"
00:36:40.154 ],
00:36:40.154 "product_name": "Raid Volume",
00:36:40.154 "block_size": 512,
00:36:40.154 "num_blocks": 196608,
00:36:40.154 "uuid": "2dbeeb2c-6380-4c85-8159-33b66b28576f",
00:36:40.154 "assigned_rate_limits": {
00:36:40.154 "rw_ios_per_sec": 0,
00:36:40.154 "rw_mbytes_per_sec": 0,
00:36:40.154 "r_mbytes_per_sec": 0,
00:36:40.154 "w_mbytes_per_sec": 0
00:36:40.154 },
00:36:40.154 "claimed": false,
00:36:40.154 "zoned": false,
00:36:40.154 "supported_io_types": {
00:36:40.154 "read": true,
00:36:40.154 "write": true,
00:36:40.154 "unmap": true,
00:36:40.154 "flush": true,
00:36:40.154 "reset": true,
00:36:40.154 "nvme_admin": false,
00:36:40.154 "nvme_io": false,
00:36:40.154 "nvme_io_md": false,
00:36:40.154 "write_zeroes": true,
00:36:40.154 "zcopy": false,
00:36:40.154 "get_zone_info": false,
00:36:40.154 "zone_management": false,
00:36:40.154 "zone_append": false,
00:36:40.154 "compare": false,
00:36:40.154 "compare_and_write": false,
00:36:40.154 "abort": false,
00:36:40.154 "seek_hole": false,
00:36:40.154 "seek_data": false,
00:36:40.154 "copy": false,
00:36:40.154 "nvme_iov_md": false
00:36:40.154 },
00:36:40.154 "memory_domains": [
00:36:40.154 {
00:36:40.154 "dma_device_id": "system",
00:36:40.154 "dma_device_type": 1
00:36:40.154 },
00:36:40.154 {
00:36:40.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:36:40.154 "dma_device_type": 2
00:36:40.154 },
00:36:40.154 {
00:36:40.154 "dma_device_id": "system",
00:36:40.154 "dma_device_type": 1
00:36:40.154 },
00:36:40.154 {
00:36:40.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:36:40.154 "dma_device_type": 2
00:36:40.154 },
00:36:40.154 {
00:36:40.154 "dma_device_id": "system",
00:36:40.154 "dma_device_type": 1
00:36:40.154 },
00:36:40.154 {
00:36:40.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:36:40.154 "dma_device_type": 2
00:36:40.154 }
00:36:40.154 ],
00:36:40.154 "driver_specific": {
00:36:40.154 "raid": {
00:36:40.154 "uuid": "2dbeeb2c-6380-4c85-8159-33b66b28576f",
00:36:40.154 "strip_size_kb": 64,
00:36:40.154 "state": "online",
00:36:40.154 "raid_level": "concat",
00:36:40.154 "superblock": false,
00:36:40.154 "num_base_bdevs": 3,
00:36:40.154 "num_base_bdevs_discovered": 3,
00:36:40.154 "num_base_bdevs_operational": 3,
00:36:40.154 "base_bdevs_list": [
00:36:40.154 {
00:36:40.154 "name": "BaseBdev1",
00:36:40.154 "uuid": "6860bdf2-8bda-4cec-8b03-e98a88e580e2",
00:36:40.154 "is_configured": true,
00:36:40.154 "data_offset": 0,
00:36:40.154 "data_size": 65536
00:36:40.154 },
00:36:40.154 {
00:36:40.154 "name": "BaseBdev2",
00:36:40.154 "uuid": "f697dfbf-9360-4812-a0b3-8227fcba5133",
00:36:40.154 "is_configured": true,
00:36:40.154 "data_offset": 0,
00:36:40.154 "data_size": 65536
00:36:40.154 },
00:36:40.154 {
00:36:40.154 "name": "BaseBdev3",
00:36:40.154 "uuid": "a6a1a79f-2fb2-46e6-9533-d0746bbb97cc",
00:36:40.154 "is_configured": true,
00:36:40.154 "data_offset": 0,
00:36:40.154 "data_size": 65536
00:36:40.154 }
00:36:40.154 ]
00:36:40.154 }
00:36:40.154 }
00:36:40.154 }'
00:36:40.154 05:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:36:40.154 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:36:40.154 BaseBdev2
00:36:40.154 BaseBdev3'
00:36:40.154 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:36:40.154 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:36:40.154 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:36:40.154 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:36:40.154 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:36:40.154 05:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.154 05:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:36:40.154 05:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.412 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:36:40.412 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:36:40.412 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:36:40.412 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:36:40.412 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:36:40.412 05:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.412 05:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:36:40.412 05:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.412 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:36:40.412 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:36:40.412 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:36:40.412 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:36:40.412 05:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.412 05:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:36:40.412 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:36:40.412 05:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.412 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
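The trace above shows verify_raid_bdev_properties extracting `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` once for the RAID volume and once per base bdev, then string-comparing the results. A minimal stand-alone sketch of that comparison pattern (sample values hard-coded; `verify_block_params` is an illustrative name, not a function from the SPDK test suite):

```shell
#!/usr/bin/env bash
# Sketch of the check bdev_raid.sh@191-193 performs in the log above:
# the raid bdev's "block_size md_size md_interleave dif_type" string
# must match every base bdev's. jq's join(" ") renders null fields as
# empty strings, which is why the log compares against '512   '
# (trailing spaces included, hence the escaped spaces in [[ ... ]]).
verify_block_params() {
    local cmp_raid_bdev=$1
    shift
    local cmp_base_bdev
    for cmp_base_bdev in "$@"; do
        # quoted RHS so [[ ]] compares literally instead of glob-matching
        [[ $cmp_base_bdev == "$cmp_raid_bdev" ]] || return 1
    done
}

# Three base bdevs, all 512-byte blocks with no metadata/DIF fields:
verify_block_params '512   ' '512   ' '512   ' '512   ' && echo OK
```

Run standalone this prints `OK`; a single mismatching base bdev (say `'4096   '`) makes the helper return non-zero, which is the failure the test would report.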
00:36:40.412 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:36:40.412 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:36:40.413 05:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.413 05:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:36:40.413 [2024-12-09 05:27:27.245180] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:36:40.413 [2024-12-09 05:27:27.245210] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:36:40.413 [2024-12-09 05:27:27.245276] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:36:40.413 05:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.413 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:36:40.413 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:36:40.413 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:36:40.413 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:36:40.413 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:36:40.413 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2
00:36:40.413 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:36:40.413 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:36:40.413 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:36:40.413 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:36:40.413 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:36:40.413 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:36:40.413 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:36:40.413 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:36:40.413 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:36:40.413 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:36:40.413 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:36:40.413 05:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.413 05:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:36:40.413 05:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.671 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:36:40.671 "name": "Existed_Raid",
00:36:40.671 "uuid": "2dbeeb2c-6380-4c85-8159-33b66b28576f",
00:36:40.671 "strip_size_kb": 64,
00:36:40.671 "state": "offline",
00:36:40.671 "raid_level": "concat",
00:36:40.671 "superblock": false,
00:36:40.671 "num_base_bdevs": 3,
00:36:40.671 "num_base_bdevs_discovered": 2,
00:36:40.671 "num_base_bdevs_operational": 2,
00:36:40.671 "base_bdevs_list": [
00:36:40.671 {
00:36:40.671 "name": null,
00:36:40.671 "uuid": "00000000-0000-0000-0000-000000000000",
00:36:40.671 "is_configured": false,
00:36:40.671 "data_offset": 0,
00:36:40.671 "data_size": 65536
00:36:40.671 },
00:36:40.671 {
00:36:40.671 "name": "BaseBdev2",
00:36:40.671 "uuid": "f697dfbf-9360-4812-a0b3-8227fcba5133",
00:36:40.671 "is_configured": true,
00:36:40.671 "data_offset": 0,
00:36:40.671 "data_size": 65536
00:36:40.671 },
00:36:40.671 {
00:36:40.671 "name": "BaseBdev3",
00:36:40.671 "uuid": "a6a1a79f-2fb2-46e6-9533-d0746bbb97cc",
00:36:40.671 "is_configured": true,
00:36:40.671 "data_offset": 0,
00:36:40.671 "data_size": 65536
00:36:40.671 }
00:36:40.671 ]
00:36:40.671 }'
00:36:40.671 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:36:40.671 05:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:36:40.929 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:36:40.929 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:36:40.929 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:36:40.929 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:36:40.929 05:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.929 05:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:36:40.929 05:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.186 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:36:41.186 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:36:41.186 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:36:41.186 05:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.186 05:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:36:41.186 [2024-12-09 05:27:27.904789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:36:41.186 05:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.186 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:36:41.187 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:36:41.187 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:36:41.187 05:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.187 05:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:36:41.187 05:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:36:41.187 05:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.187 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:36:41.187 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:36:41.187 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:36:41.187 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.187 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:36:41.187 [2024-12-09 05:27:28.042152] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:36:41.187 [2024-12-09 05:27:28.042220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:36:41.187 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.187 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:36:41.187 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:36:41.187 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:36:41.187 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:36:41.187 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.187 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:36:41.187 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:36:41.445 BaseBdev2
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:36:41.445 [
00:36:41.445 {
00:36:41.445 "name": "BaseBdev2",
00:36:41.445 "aliases": [
00:36:41.445 "28e01940-0e4a-43ce-b4bd-4b5c96b17f86"
00:36:41.445 ],
00:36:41.445 "product_name": "Malloc disk",
00:36:41.445 "block_size": 512,
00:36:41.445 "num_blocks": 65536,
00:36:41.445 "uuid": "28e01940-0e4a-43ce-b4bd-4b5c96b17f86",
00:36:41.445 "assigned_rate_limits": {
00:36:41.445 "rw_ios_per_sec": 0,
00:36:41.445 "rw_mbytes_per_sec": 0,
00:36:41.445 "r_mbytes_per_sec": 0,
00:36:41.445 "w_mbytes_per_sec": 0
00:36:41.445 },
00:36:41.445 "claimed": false,
00:36:41.445 "zoned": false,
00:36:41.445 "supported_io_types": {
00:36:41.445 "read": true,
00:36:41.445 "write": true,
00:36:41.445 "unmap": true,
00:36:41.445 "flush": true,
00:36:41.445 "reset": true,
00:36:41.445 "nvme_admin": false,
00:36:41.445 "nvme_io": false,
00:36:41.445 "nvme_io_md": false,
00:36:41.445 "write_zeroes": true,
00:36:41.445 "zcopy": true,
00:36:41.445 "get_zone_info": false,
00:36:41.445 "zone_management": false,
00:36:41.445 "zone_append": false,
00:36:41.445 "compare": false,
00:36:41.445 "compare_and_write": false,
00:36:41.445 "abort": true,
00:36:41.445 "seek_hole": false,
00:36:41.445 "seek_data": false,
00:36:41.445 "copy": true,
00:36:41.445 "nvme_iov_md": false
00:36:41.445 },
00:36:41.445 "memory_domains": [
00:36:41.445 {
00:36:41.445 "dma_device_id": "system",
00:36:41.445 "dma_device_type": 1
00:36:41.445 },
00:36:41.445 {
00:36:41.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:36:41.445 "dma_device_type": 2
00:36:41.445 }
00:36:41.445 ],
00:36:41.445 "driver_specific": {}
00:36:41.445 }
00:36:41.445 ]
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:36:41.445 BaseBdev3
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:36:41.445 [
00:36:41.445 {
00:36:41.445 "name": "BaseBdev3",
00:36:41.445 "aliases": [
00:36:41.445 "b7ba4bf5-eb6d-47a9-9e5a-7bc59250b8c5"
00:36:41.445 ],
00:36:41.445 "product_name": "Malloc disk",
00:36:41.445 "block_size": 512,
00:36:41.445 "num_blocks": 65536,
00:36:41.445 "uuid": "b7ba4bf5-eb6d-47a9-9e5a-7bc59250b8c5",
00:36:41.445 "assigned_rate_limits": {
00:36:41.445 "rw_ios_per_sec": 0,
00:36:41.445 "rw_mbytes_per_sec": 0,
00:36:41.445 "r_mbytes_per_sec": 0,
00:36:41.445 "w_mbytes_per_sec": 0
00:36:41.445 },
00:36:41.445 "claimed": false,
00:36:41.445 "zoned": false,
00:36:41.445 "supported_io_types": {
00:36:41.445 "read": true,
00:36:41.445 "write": true,
00:36:41.445 "unmap": true,
00:36:41.445 "flush": true,
00:36:41.445 "reset": true,
00:36:41.445 "nvme_admin": false,
00:36:41.445 "nvme_io": false,
00:36:41.445 "nvme_io_md": false,
00:36:41.445 "write_zeroes": true,
00:36:41.445 "zcopy": true,
00:36:41.445 "get_zone_info": false,
00:36:41.445 "zone_management": false,
00:36:41.445 "zone_append": false,
00:36:41.445 "compare": false,
00:36:41.445 "compare_and_write": false,
00:36:41.445 "abort": true,
00:36:41.445 "seek_hole": false,
00:36:41.445 "seek_data": false,
00:36:41.445 "copy": true,
00:36:41.445 "nvme_iov_md": false
00:36:41.445 },
00:36:41.445 "memory_domains": [
00:36:41.445 {
00:36:41.445 "dma_device_id": "system",
00:36:41.445 "dma_device_type": 1
00:36:41.445 },
00:36:41.445 {
00:36:41.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:36:41.445 "dma_device_type": 2
00:36:41.445 }
00:36:41.445 ],
00:36:41.445 "driver_specific": {}
00:36:41.445 }
00:36:41.445 ]
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:36:41.445 [2024-12-09 05:27:28.340925] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:36:41.445 [2024-12-09 05:27:28.341149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:36:41.445 [2024-12-09 05:27:28.341280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:36:41.445 [2024-12-09 05:27:28.343752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:36:41.445 "name": "Existed_Raid",
00:36:41.445 "uuid": "00000000-0000-0000-0000-000000000000",
00:36:41.445 "strip_size_kb": 64,
00:36:41.445 "state": "configuring",
00:36:41.445 "raid_level": "concat",
00:36:41.445 "superblock": false,
00:36:41.445 "num_base_bdevs": 3,
00:36:41.445 "num_base_bdevs_discovered": 2,
00:36:41.445 "num_base_bdevs_operational": 3,
00:36:41.445 "base_bdevs_list": [
00:36:41.445 {
00:36:41.445 "name": "BaseBdev1",
00:36:41.445 "uuid": "00000000-0000-0000-0000-000000000000",
00:36:41.445 "is_configured": false,
00:36:41.445 "data_offset": 0,
00:36:41.445 "data_size": 0
00:36:41.445 },
00:36:41.445 {
00:36:41.445 "name": "BaseBdev2",
00:36:41.445 "uuid": "28e01940-0e4a-43ce-b4bd-4b5c96b17f86",
00:36:41.445 "is_configured": true,
00:36:41.445 "data_offset": 0,
00:36:41.445 "data_size": 65536
00:36:41.445 },
00:36:41.445 {
00:36:41.445 "name": "BaseBdev3",
00:36:41.445 "uuid": "b7ba4bf5-eb6d-47a9-9e5a-7bc59250b8c5",
00:36:41.445 "is_configured": true,
00:36:41.445 "data_offset": 0,
00:36:41.445 "data_size": 65536
00:36:41.445 }
00:36:41.445 ]
00:36:41.445 }'
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:36:41.445 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:36:42.012 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:36:42.012 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:42.012 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:36:42.012 [2024-12-09 05:27:28.865208] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:36:42.012 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:42.012 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:36:42.012 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:36:42.012 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:36:42.012 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:36:42.012 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:36:42.012 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:36:42.012 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:36:42.012 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:36:42.012 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:36:42.012 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:36:42.012 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:36:42.012 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:42.012 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:36:42.012 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:36:42.012 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:42.012 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:36:42.012 "name": "Existed_Raid",
00:36:42.012 "uuid": "00000000-0000-0000-0000-000000000000",
00:36:42.012 "strip_size_kb": 64,
00:36:42.012 "state": "configuring",
00:36:42.012 "raid_level": "concat",
00:36:42.012 "superblock": false,
00:36:42.012 "num_base_bdevs": 3,
00:36:42.012 "num_base_bdevs_discovered": 1,
00:36:42.012 "num_base_bdevs_operational": 3,
00:36:42.012 "base_bdevs_list": [
00:36:42.012 {
00:36:42.012 "name": "BaseBdev1",
00:36:42.012 "uuid": "00000000-0000-0000-0000-000000000000",
00:36:42.012 "is_configured": false,
00:36:42.012 "data_offset": 0,
00:36:42.012 "data_size": 0
00:36:42.012 },
00:36:42.012 {
00:36:42.012 "name": null,
00:36:42.012 "uuid": "28e01940-0e4a-43ce-b4bd-4b5c96b17f86",
00:36:42.012 "is_configured": false,
00:36:42.012 "data_offset": 0,
00:36:42.012 "data_size": 65536
00:36:42.013 },
00:36:42.013 {
00:36:42.013 "name": "BaseBdev3",
00:36:42.013 "uuid": "b7ba4bf5-eb6d-47a9-9e5a-7bc59250b8c5",
00:36:42.013 "is_configured": true,
00:36:42.013 "data_offset": 0,
00:36:42.013 "data_size": 65536
00:36:42.013 }
00:36:42.013 ]
00:36:42.013 }'
00:36:42.013 05:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:36:42.013 05:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:36:42.578 05:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:36:42.578 05:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:36:42.578 05:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:42.578 05:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:36:42.578 05:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:42.578 05:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:36:42.578 05:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:36:42.578 05:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:42.578 05:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:36:42.578 [2024-12-09 05:27:29.474273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:36:42.578 BaseBdev1
00:36:42.578 05:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:42.578 05:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:36:42.578 05:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:36:42.578 05:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:36:42.578 05:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:36:42.578 05:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:36:42.578 05:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:36:42.578 05:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:36:42.578 05:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:42.578 05:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:36:42.578 05:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:42.578 05:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:36:42.578 05:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:42.578 05:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:36:42.578 [
00:36:42.578 {
00:36:42.578 "name": "BaseBdev1",
00:36:42.578 "aliases": [
00:36:42.578 "794f34c7-259b-456c-be98-52eb9f166bb6"
00:36:42.578 ],
00:36:42.578 "product_name": "Malloc disk",
00:36:42.578 "block_size": 512,
00:36:42.578 "num_blocks": 65536,
00:36:42.579 "uuid": "794f34c7-259b-456c-be98-52eb9f166bb6",
00:36:42.579 "assigned_rate_limits": {
00:36:42.579 "rw_ios_per_sec": 0,
00:36:42.579 "rw_mbytes_per_sec": 0,
00:36:42.579 "r_mbytes_per_sec": 0,
00:36:42.579 "w_mbytes_per_sec": 0
00:36:42.579 },
00:36:42.579 "claimed": true,
00:36:42.579 "claim_type": "exclusive_write",
00:36:42.579 "zoned": false,
00:36:42.579 "supported_io_types": {
00:36:42.579 "read": true,
00:36:42.579 "write": true,
00:36:42.579 "unmap": true,
00:36:42.579 "flush": true,
00:36:42.579 "reset": true,
00:36:42.579 "nvme_admin": false,
00:36:42.579 "nvme_io": false,
00:36:42.579 "nvme_io_md": false,
00:36:42.579 "write_zeroes": true,
00:36:42.579 "zcopy": true,
00:36:42.579 "get_zone_info": false,
00:36:42.579 "zone_management": false,
00:36:42.579 "zone_append": false,
00:36:42.579 "compare": false,
00:36:42.579 "compare_and_write": false,
00:36:42.579 "abort": true,
00:36:42.579 "seek_hole": false,
00:36:42.579 "seek_data": false,
00:36:42.579 "copy": true,
00:36:42.579 "nvme_iov_md": false
00:36:42.579 },
00:36:42.579 "memory_domains": [
00:36:42.579 {
00:36:42.579 "dma_device_id": "system",
00:36:42.579 "dma_device_type": 1
00:36:42.579 },
00:36:42.579 {
00:36:42.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:36:42.579 "dma_device_type": 2
00:36:42.579 }
00:36:42.579 ],
00:36:42.579 "driver_specific": {}
00:36:42.579 }
00:36:42.579 ]
00:36:42.579 05:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:42.579 05:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:36:42.579 05:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:36:42.579 05:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:36:42.579 05:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:36:42.579 05:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:36:42.579 05:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:36:42.579 05:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:36:42.579 05:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:36:42.579 05:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:36:42.579 05:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:36:42.579 05:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:36:42.579 05:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:36:42.579 05:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:36:42.579 05:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:42.579 05:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:36:42.579 05:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:42.836 05:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:36:42.836 "name": "Existed_Raid",
00:36:42.836 "uuid": "00000000-0000-0000-0000-000000000000",
00:36:42.836 "strip_size_kb": 64,
00:36:42.836 "state": "configuring",
00:36:42.836 "raid_level": "concat",
00:36:42.836 "superblock": false,
00:36:42.836 "num_base_bdevs": 3,
00:36:42.836 "num_base_bdevs_discovered": 2,
00:36:42.836 "num_base_bdevs_operational": 3,
00:36:42.836 "base_bdevs_list": [
00:36:42.836 {
00:36:42.836 "name": "BaseBdev1",
00:36:42.836 "uuid": "794f34c7-259b-456c-be98-52eb9f166bb6", 00:36:42.837 "is_configured": true, 00:36:42.837 "data_offset": 0, 00:36:42.837 "data_size": 65536 00:36:42.837 }, 00:36:42.837 { 00:36:42.837 "name": null, 00:36:42.837 "uuid": "28e01940-0e4a-43ce-b4bd-4b5c96b17f86", 00:36:42.837 "is_configured": false, 00:36:42.837 "data_offset": 0, 00:36:42.837 "data_size": 65536 00:36:42.837 }, 00:36:42.837 { 00:36:42.837 "name": "BaseBdev3", 00:36:42.837 "uuid": "b7ba4bf5-eb6d-47a9-9e5a-7bc59250b8c5", 00:36:42.837 "is_configured": true, 00:36:42.837 "data_offset": 0, 00:36:42.837 "data_size": 65536 00:36:42.837 } 00:36:42.837 ] 00:36:42.837 }' 00:36:42.837 05:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:42.837 05:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.095 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:36:43.095 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:43.095 05:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.095 05:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.095 05:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.352 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:36:43.352 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:36:43.352 05:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.352 05:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.352 [2024-12-09 05:27:30.090666] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:36:43.352 
05:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.352 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:36:43.352 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:43.352 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:43.352 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:43.352 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:43.352 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:43.352 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:43.352 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:43.352 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:43.352 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:43.352 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:43.352 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:43.352 05:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.352 05:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.352 05:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.352 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:43.352 "name": "Existed_Raid", 00:36:43.352 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:36:43.352 "strip_size_kb": 64, 00:36:43.352 "state": "configuring", 00:36:43.352 "raid_level": "concat", 00:36:43.352 "superblock": false, 00:36:43.352 "num_base_bdevs": 3, 00:36:43.352 "num_base_bdevs_discovered": 1, 00:36:43.352 "num_base_bdevs_operational": 3, 00:36:43.352 "base_bdevs_list": [ 00:36:43.352 { 00:36:43.352 "name": "BaseBdev1", 00:36:43.352 "uuid": "794f34c7-259b-456c-be98-52eb9f166bb6", 00:36:43.352 "is_configured": true, 00:36:43.352 "data_offset": 0, 00:36:43.352 "data_size": 65536 00:36:43.352 }, 00:36:43.352 { 00:36:43.352 "name": null, 00:36:43.353 "uuid": "28e01940-0e4a-43ce-b4bd-4b5c96b17f86", 00:36:43.353 "is_configured": false, 00:36:43.353 "data_offset": 0, 00:36:43.353 "data_size": 65536 00:36:43.353 }, 00:36:43.353 { 00:36:43.353 "name": null, 00:36:43.353 "uuid": "b7ba4bf5-eb6d-47a9-9e5a-7bc59250b8c5", 00:36:43.353 "is_configured": false, 00:36:43.353 "data_offset": 0, 00:36:43.353 "data_size": 65536 00:36:43.353 } 00:36:43.353 ] 00:36:43.353 }' 00:36:43.353 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:43.353 05:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.919 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:43.919 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:36:43.919 05:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.919 05:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.919 05:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.919 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:36:43.919 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:36:43.919 05:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.919 05:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.919 [2024-12-09 05:27:30.674824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:43.919 05:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.919 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:36:43.919 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:43.919 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:43.919 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:43.919 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:43.919 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:43.919 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:43.919 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:43.919 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:43.919 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:43.919 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:43.919 05:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.919 05:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:36:43.919 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:43.919 05:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.919 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:43.919 "name": "Existed_Raid", 00:36:43.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:43.919 "strip_size_kb": 64, 00:36:43.919 "state": "configuring", 00:36:43.919 "raid_level": "concat", 00:36:43.919 "superblock": false, 00:36:43.919 "num_base_bdevs": 3, 00:36:43.919 "num_base_bdevs_discovered": 2, 00:36:43.919 "num_base_bdevs_operational": 3, 00:36:43.919 "base_bdevs_list": [ 00:36:43.919 { 00:36:43.919 "name": "BaseBdev1", 00:36:43.919 "uuid": "794f34c7-259b-456c-be98-52eb9f166bb6", 00:36:43.919 "is_configured": true, 00:36:43.919 "data_offset": 0, 00:36:43.919 "data_size": 65536 00:36:43.919 }, 00:36:43.919 { 00:36:43.919 "name": null, 00:36:43.919 "uuid": "28e01940-0e4a-43ce-b4bd-4b5c96b17f86", 00:36:43.919 "is_configured": false, 00:36:43.919 "data_offset": 0, 00:36:43.919 "data_size": 65536 00:36:43.919 }, 00:36:43.919 { 00:36:43.919 "name": "BaseBdev3", 00:36:43.919 "uuid": "b7ba4bf5-eb6d-47a9-9e5a-7bc59250b8c5", 00:36:43.919 "is_configured": true, 00:36:43.919 "data_offset": 0, 00:36:43.919 "data_size": 65536 00:36:43.919 } 00:36:43.919 ] 00:36:43.919 }' 00:36:43.919 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:43.919 05:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:44.486 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:44.486 05:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.486 05:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:44.486 05:27:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:36:44.486 05:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.486 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:36:44.486 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:36:44.486 05:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.486 05:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:44.486 [2024-12-09 05:27:31.259009] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:44.486 05:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.486 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:36:44.486 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:44.486 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:44.486 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:44.486 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:44.486 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:44.486 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:44.486 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:44.486 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:44.486 05:27:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:36:44.486 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:44.486 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:44.486 05:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.486 05:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:44.486 05:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.486 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:44.486 "name": "Existed_Raid", 00:36:44.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:44.486 "strip_size_kb": 64, 00:36:44.486 "state": "configuring", 00:36:44.487 "raid_level": "concat", 00:36:44.487 "superblock": false, 00:36:44.487 "num_base_bdevs": 3, 00:36:44.487 "num_base_bdevs_discovered": 1, 00:36:44.487 "num_base_bdevs_operational": 3, 00:36:44.487 "base_bdevs_list": [ 00:36:44.487 { 00:36:44.487 "name": null, 00:36:44.487 "uuid": "794f34c7-259b-456c-be98-52eb9f166bb6", 00:36:44.487 "is_configured": false, 00:36:44.487 "data_offset": 0, 00:36:44.487 "data_size": 65536 00:36:44.487 }, 00:36:44.487 { 00:36:44.487 "name": null, 00:36:44.487 "uuid": "28e01940-0e4a-43ce-b4bd-4b5c96b17f86", 00:36:44.487 "is_configured": false, 00:36:44.487 "data_offset": 0, 00:36:44.487 "data_size": 65536 00:36:44.487 }, 00:36:44.487 { 00:36:44.487 "name": "BaseBdev3", 00:36:44.487 "uuid": "b7ba4bf5-eb6d-47a9-9e5a-7bc59250b8c5", 00:36:44.487 "is_configured": true, 00:36:44.487 "data_offset": 0, 00:36:44.487 "data_size": 65536 00:36:44.487 } 00:36:44.487 ] 00:36:44.487 }' 00:36:44.487 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:44.487 05:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:36:45.054 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:45.054 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:36:45.054 05:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.054 05:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:45.054 05:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.054 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:36:45.054 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:36:45.055 05:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.055 05:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:45.055 [2024-12-09 05:27:31.882607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:45.055 05:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.055 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:36:45.055 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:45.055 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:45.055 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:45.055 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:45.055 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:36:45.055 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:45.055 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:45.055 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:45.055 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:45.055 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:45.055 05:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.055 05:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:45.055 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:45.055 05:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.055 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:45.055 "name": "Existed_Raid", 00:36:45.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:45.055 "strip_size_kb": 64, 00:36:45.055 "state": "configuring", 00:36:45.055 "raid_level": "concat", 00:36:45.055 "superblock": false, 00:36:45.055 "num_base_bdevs": 3, 00:36:45.055 "num_base_bdevs_discovered": 2, 00:36:45.055 "num_base_bdevs_operational": 3, 00:36:45.055 "base_bdevs_list": [ 00:36:45.055 { 00:36:45.055 "name": null, 00:36:45.055 "uuid": "794f34c7-259b-456c-be98-52eb9f166bb6", 00:36:45.055 "is_configured": false, 00:36:45.055 "data_offset": 0, 00:36:45.055 "data_size": 65536 00:36:45.055 }, 00:36:45.055 { 00:36:45.055 "name": "BaseBdev2", 00:36:45.055 "uuid": "28e01940-0e4a-43ce-b4bd-4b5c96b17f86", 00:36:45.055 "is_configured": true, 00:36:45.055 "data_offset": 0, 00:36:45.055 "data_size": 65536 00:36:45.055 }, 00:36:45.055 { 
00:36:45.055 "name": "BaseBdev3", 00:36:45.055 "uuid": "b7ba4bf5-eb6d-47a9-9e5a-7bc59250b8c5", 00:36:45.055 "is_configured": true, 00:36:45.055 "data_offset": 0, 00:36:45.055 "data_size": 65536 00:36:45.055 } 00:36:45.055 ] 00:36:45.055 }' 00:36:45.055 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:45.055 05:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:45.623 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:36:45.623 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:45.623 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.623 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:45.623 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.623 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:36:45.623 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:45.623 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:36:45.623 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.623 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:45.623 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.623 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 794f34c7-259b-456c-be98-52eb9f166bb6 00:36:45.623 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.623 05:27:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:45.623 [2024-12-09 05:27:32.569287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:36:45.623 [2024-12-09 05:27:32.569338] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:36:45.623 [2024-12-09 05:27:32.569352] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:36:45.623 [2024-12-09 05:27:32.569630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:36:45.623 [2024-12-09 05:27:32.569866] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:36:45.623 [2024-12-09 05:27:32.569883] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:36:45.623 [2024-12-09 05:27:32.570247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:45.623 NewBaseBdev 00:36:45.623 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.623 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:36:45.623 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:36:45.623 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:45.623 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:45.623 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:45.623 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:45.623 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:45.623 05:27:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.623 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:45.623 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.623 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:36:45.623 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.623 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:45.882 [ 00:36:45.882 { 00:36:45.882 "name": "NewBaseBdev", 00:36:45.882 "aliases": [ 00:36:45.882 "794f34c7-259b-456c-be98-52eb9f166bb6" 00:36:45.882 ], 00:36:45.882 "product_name": "Malloc disk", 00:36:45.882 "block_size": 512, 00:36:45.882 "num_blocks": 65536, 00:36:45.882 "uuid": "794f34c7-259b-456c-be98-52eb9f166bb6", 00:36:45.882 "assigned_rate_limits": { 00:36:45.882 "rw_ios_per_sec": 0, 00:36:45.882 "rw_mbytes_per_sec": 0, 00:36:45.882 "r_mbytes_per_sec": 0, 00:36:45.882 "w_mbytes_per_sec": 0 00:36:45.882 }, 00:36:45.882 "claimed": true, 00:36:45.882 "claim_type": "exclusive_write", 00:36:45.882 "zoned": false, 00:36:45.882 "supported_io_types": { 00:36:45.882 "read": true, 00:36:45.882 "write": true, 00:36:45.882 "unmap": true, 00:36:45.882 "flush": true, 00:36:45.882 "reset": true, 00:36:45.882 "nvme_admin": false, 00:36:45.882 "nvme_io": false, 00:36:45.882 "nvme_io_md": false, 00:36:45.882 "write_zeroes": true, 00:36:45.882 "zcopy": true, 00:36:45.882 "get_zone_info": false, 00:36:45.882 "zone_management": false, 00:36:45.882 "zone_append": false, 00:36:45.882 "compare": false, 00:36:45.882 "compare_and_write": false, 00:36:45.882 "abort": true, 00:36:45.882 "seek_hole": false, 00:36:45.882 "seek_data": false, 00:36:45.882 "copy": true, 00:36:45.882 "nvme_iov_md": false 00:36:45.882 }, 00:36:45.882 "memory_domains": [ 00:36:45.882 { 00:36:45.882 
"dma_device_id": "system", 00:36:45.882 "dma_device_type": 1 00:36:45.882 }, 00:36:45.882 { 00:36:45.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:45.882 "dma_device_type": 2 00:36:45.882 } 00:36:45.882 ], 00:36:45.882 "driver_specific": {} 00:36:45.882 } 00:36:45.882 ] 00:36:45.882 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.882 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:45.883 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:36:45.883 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:45.883 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:45.883 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:45.883 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:45.883 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:45.883 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:45.883 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:45.883 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:45.883 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:45.883 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:45.883 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.883 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:36:45.883 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:45.883 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.883 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:45.883 "name": "Existed_Raid", 00:36:45.883 "uuid": "8b32fe75-2ac0-4d6e-8731-4884773a5464", 00:36:45.883 "strip_size_kb": 64, 00:36:45.883 "state": "online", 00:36:45.883 "raid_level": "concat", 00:36:45.883 "superblock": false, 00:36:45.883 "num_base_bdevs": 3, 00:36:45.883 "num_base_bdevs_discovered": 3, 00:36:45.883 "num_base_bdevs_operational": 3, 00:36:45.883 "base_bdevs_list": [ 00:36:45.883 { 00:36:45.883 "name": "NewBaseBdev", 00:36:45.883 "uuid": "794f34c7-259b-456c-be98-52eb9f166bb6", 00:36:45.883 "is_configured": true, 00:36:45.883 "data_offset": 0, 00:36:45.883 "data_size": 65536 00:36:45.883 }, 00:36:45.883 { 00:36:45.883 "name": "BaseBdev2", 00:36:45.883 "uuid": "28e01940-0e4a-43ce-b4bd-4b5c96b17f86", 00:36:45.883 "is_configured": true, 00:36:45.883 "data_offset": 0, 00:36:45.883 "data_size": 65536 00:36:45.883 }, 00:36:45.883 { 00:36:45.883 "name": "BaseBdev3", 00:36:45.883 "uuid": "b7ba4bf5-eb6d-47a9-9e5a-7bc59250b8c5", 00:36:45.883 "is_configured": true, 00:36:45.883 "data_offset": 0, 00:36:45.883 "data_size": 65536 00:36:45.883 } 00:36:45.883 ] 00:36:45.883 }' 00:36:45.883 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:45.883 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:46.452 [2024-12-09 05:27:33.141748] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:46.452 "name": "Existed_Raid", 00:36:46.452 "aliases": [ 00:36:46.452 "8b32fe75-2ac0-4d6e-8731-4884773a5464" 00:36:46.452 ], 00:36:46.452 "product_name": "Raid Volume", 00:36:46.452 "block_size": 512, 00:36:46.452 "num_blocks": 196608, 00:36:46.452 "uuid": "8b32fe75-2ac0-4d6e-8731-4884773a5464", 00:36:46.452 "assigned_rate_limits": { 00:36:46.452 "rw_ios_per_sec": 0, 00:36:46.452 "rw_mbytes_per_sec": 0, 00:36:46.452 "r_mbytes_per_sec": 0, 00:36:46.452 "w_mbytes_per_sec": 0 00:36:46.452 }, 00:36:46.452 "claimed": false, 00:36:46.452 "zoned": false, 00:36:46.452 "supported_io_types": { 00:36:46.452 "read": true, 00:36:46.452 "write": true, 00:36:46.452 "unmap": true, 00:36:46.452 "flush": true, 00:36:46.452 "reset": true, 00:36:46.452 "nvme_admin": false, 00:36:46.452 "nvme_io": false, 00:36:46.452 "nvme_io_md": false, 00:36:46.452 "write_zeroes": true, 00:36:46.452 "zcopy": false, 
00:36:46.452 "get_zone_info": false, 00:36:46.452 "zone_management": false, 00:36:46.452 "zone_append": false, 00:36:46.452 "compare": false, 00:36:46.452 "compare_and_write": false, 00:36:46.452 "abort": false, 00:36:46.452 "seek_hole": false, 00:36:46.452 "seek_data": false, 00:36:46.452 "copy": false, 00:36:46.452 "nvme_iov_md": false 00:36:46.452 }, 00:36:46.452 "memory_domains": [ 00:36:46.452 { 00:36:46.452 "dma_device_id": "system", 00:36:46.452 "dma_device_type": 1 00:36:46.452 }, 00:36:46.452 { 00:36:46.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:46.452 "dma_device_type": 2 00:36:46.452 }, 00:36:46.452 { 00:36:46.452 "dma_device_id": "system", 00:36:46.452 "dma_device_type": 1 00:36:46.452 }, 00:36:46.452 { 00:36:46.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:46.452 "dma_device_type": 2 00:36:46.452 }, 00:36:46.452 { 00:36:46.452 "dma_device_id": "system", 00:36:46.452 "dma_device_type": 1 00:36:46.452 }, 00:36:46.452 { 00:36:46.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:46.452 "dma_device_type": 2 00:36:46.452 } 00:36:46.452 ], 00:36:46.452 "driver_specific": { 00:36:46.452 "raid": { 00:36:46.452 "uuid": "8b32fe75-2ac0-4d6e-8731-4884773a5464", 00:36:46.452 "strip_size_kb": 64, 00:36:46.452 "state": "online", 00:36:46.452 "raid_level": "concat", 00:36:46.452 "superblock": false, 00:36:46.452 "num_base_bdevs": 3, 00:36:46.452 "num_base_bdevs_discovered": 3, 00:36:46.452 "num_base_bdevs_operational": 3, 00:36:46.452 "base_bdevs_list": [ 00:36:46.452 { 00:36:46.452 "name": "NewBaseBdev", 00:36:46.452 "uuid": "794f34c7-259b-456c-be98-52eb9f166bb6", 00:36:46.452 "is_configured": true, 00:36:46.452 "data_offset": 0, 00:36:46.452 "data_size": 65536 00:36:46.452 }, 00:36:46.452 { 00:36:46.452 "name": "BaseBdev2", 00:36:46.452 "uuid": "28e01940-0e4a-43ce-b4bd-4b5c96b17f86", 00:36:46.452 "is_configured": true, 00:36:46.452 "data_offset": 0, 00:36:46.452 "data_size": 65536 00:36:46.452 }, 00:36:46.452 { 00:36:46.452 "name": "BaseBdev3", 
00:36:46.452 "uuid": "b7ba4bf5-eb6d-47a9-9e5a-7bc59250b8c5", 00:36:46.452 "is_configured": true, 00:36:46.452 "data_offset": 0, 00:36:46.452 "data_size": 65536 00:36:46.452 } 00:36:46.452 ] 00:36:46.452 } 00:36:46.452 } 00:36:46.452 }' 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:36:46.452 BaseBdev2 00:36:46.452 BaseBdev3' 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.452 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:46.712 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.712 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:46.712 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:46.712 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:46.712 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.712 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:36:46.712 [2024-12-09 05:27:33.461532] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:46.712 [2024-12-09 05:27:33.461599] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:46.712 [2024-12-09 05:27:33.461702] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:46.712 [2024-12-09 05:27:33.461798] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:46.712 [2024-12-09 05:27:33.461820] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:36:46.712 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.712 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65675 00:36:46.712 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65675 ']' 00:36:46.712 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65675 00:36:46.712 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:36:46.712 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:46.712 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65675 00:36:46.712 killing process with pid 65675 00:36:46.712 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:46.712 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:46.712 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65675' 00:36:46.712 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65675 00:36:46.712 
[2024-12-09 05:27:33.503009] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:46.712 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65675 00:36:46.971 [2024-12-09 05:27:33.769859] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:36:48.348 ************************************ 00:36:48.348 END TEST raid_state_function_test 00:36:48.348 ************************************ 00:36:48.348 00:36:48.348 real 0m11.884s 00:36:48.348 user 0m19.554s 00:36:48.348 sys 0m1.753s 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:48.348 05:27:34 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:36:48.348 05:27:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:36:48.348 05:27:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:48.348 05:27:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:48.348 ************************************ 00:36:48.348 START TEST raid_state_function_test_sb 00:36:48.348 ************************************ 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:36:48.348 05:27:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 
00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66313 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66313' 00:36:48.348 Process raid pid: 66313 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66313 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66313 ']' 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:48.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:48.348 05:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:48.348 [2024-12-09 05:27:35.068245] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:36:48.348 [2024-12-09 05:27:35.068622] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:48.348 [2024-12-09 05:27:35.252302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:48.607 [2024-12-09 05:27:35.394027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:48.866 [2024-12-09 05:27:35.609958] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:48.866 [2024-12-09 05:27:35.610248] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:49.125 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:49.125 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:36:49.125 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:36:49.125 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.125 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:49.125 [2024-12-09 05:27:35.991648] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:49.125 [2024-12-09 05:27:35.991732] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:49.125 [2024-12-09 05:27:35.991750] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:49.125 [2024-12-09 05:27:35.991766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:49.125 [2024-12-09 05:27:35.991791] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:36:49.125 [2024-12-09 05:27:35.991824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:49.125 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.125 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:36:49.125 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:49.125 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:49.125 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:49.125 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:49.125 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:49.125 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:49.125 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:49.125 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:49.125 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:49.125 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:49.125 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.125 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:49.125 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:49.125 05:27:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.125 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:49.125 "name": "Existed_Raid", 00:36:49.125 "uuid": "fcca931e-32d2-4109-985c-cdac14b9d149", 00:36:49.125 "strip_size_kb": 64, 00:36:49.125 "state": "configuring", 00:36:49.125 "raid_level": "concat", 00:36:49.125 "superblock": true, 00:36:49.125 "num_base_bdevs": 3, 00:36:49.125 "num_base_bdevs_discovered": 0, 00:36:49.125 "num_base_bdevs_operational": 3, 00:36:49.125 "base_bdevs_list": [ 00:36:49.125 { 00:36:49.125 "name": "BaseBdev1", 00:36:49.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:49.125 "is_configured": false, 00:36:49.125 "data_offset": 0, 00:36:49.125 "data_size": 0 00:36:49.125 }, 00:36:49.125 { 00:36:49.125 "name": "BaseBdev2", 00:36:49.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:49.125 "is_configured": false, 00:36:49.125 "data_offset": 0, 00:36:49.125 "data_size": 0 00:36:49.125 }, 00:36:49.125 { 00:36:49.125 "name": "BaseBdev3", 00:36:49.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:49.125 "is_configured": false, 00:36:49.125 "data_offset": 0, 00:36:49.125 "data_size": 0 00:36:49.125 } 00:36:49.125 ] 00:36:49.125 }' 00:36:49.125 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:49.125 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:49.692 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:49.692 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.692 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:49.692 [2024-12-09 05:27:36.523733] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:49.692 [2024-12-09 05:27:36.523987] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:36:49.692 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.692 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:36:49.692 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.692 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:49.692 [2024-12-09 05:27:36.535735] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:49.692 [2024-12-09 05:27:36.535987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:49.692 [2024-12-09 05:27:36.536131] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:49.692 [2024-12-09 05:27:36.536239] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:49.692 [2024-12-09 05:27:36.536479] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:49.692 [2024-12-09 05:27:36.536552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:49.692 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.692 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:36:49.692 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.692 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:49.692 [2024-12-09 05:27:36.581282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:49.692 BaseBdev1 
00:36:49.692 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.692 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:36:49.692 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:36:49.692 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:49.692 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:49.692 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:49.692 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:49.693 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:49.693 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.693 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:49.693 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.693 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:49.693 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.693 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:49.693 [ 00:36:49.693 { 00:36:49.693 "name": "BaseBdev1", 00:36:49.693 "aliases": [ 00:36:49.693 "f523a030-603b-4870-8c30-031fcefcfa15" 00:36:49.693 ], 00:36:49.693 "product_name": "Malloc disk", 00:36:49.693 "block_size": 512, 00:36:49.693 "num_blocks": 65536, 00:36:49.693 "uuid": "f523a030-603b-4870-8c30-031fcefcfa15", 00:36:49.693 "assigned_rate_limits": { 00:36:49.693 
"rw_ios_per_sec": 0, 00:36:49.693 "rw_mbytes_per_sec": 0, 00:36:49.693 "r_mbytes_per_sec": 0, 00:36:49.693 "w_mbytes_per_sec": 0 00:36:49.693 }, 00:36:49.693 "claimed": true, 00:36:49.693 "claim_type": "exclusive_write", 00:36:49.693 "zoned": false, 00:36:49.693 "supported_io_types": { 00:36:49.693 "read": true, 00:36:49.693 "write": true, 00:36:49.693 "unmap": true, 00:36:49.693 "flush": true, 00:36:49.693 "reset": true, 00:36:49.693 "nvme_admin": false, 00:36:49.693 "nvme_io": false, 00:36:49.693 "nvme_io_md": false, 00:36:49.693 "write_zeroes": true, 00:36:49.693 "zcopy": true, 00:36:49.693 "get_zone_info": false, 00:36:49.693 "zone_management": false, 00:36:49.693 "zone_append": false, 00:36:49.693 "compare": false, 00:36:49.693 "compare_and_write": false, 00:36:49.693 "abort": true, 00:36:49.693 "seek_hole": false, 00:36:49.693 "seek_data": false, 00:36:49.693 "copy": true, 00:36:49.693 "nvme_iov_md": false 00:36:49.693 }, 00:36:49.693 "memory_domains": [ 00:36:49.693 { 00:36:49.693 "dma_device_id": "system", 00:36:49.693 "dma_device_type": 1 00:36:49.693 }, 00:36:49.693 { 00:36:49.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:49.693 "dma_device_type": 2 00:36:49.693 } 00:36:49.693 ], 00:36:49.693 "driver_specific": {} 00:36:49.693 } 00:36:49.693 ] 00:36:49.693 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.693 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:49.693 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:36:49.693 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:49.693 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:49.693 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:36:49.693 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:49.693 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:49.693 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:49.693 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:49.693 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:49.693 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:49.693 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:49.693 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:49.693 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.693 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:49.693 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.952 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:49.952 "name": "Existed_Raid", 00:36:49.952 "uuid": "39d945e8-b083-41cd-ba4e-15af93bac1bc", 00:36:49.952 "strip_size_kb": 64, 00:36:49.952 "state": "configuring", 00:36:49.952 "raid_level": "concat", 00:36:49.952 "superblock": true, 00:36:49.952 "num_base_bdevs": 3, 00:36:49.952 "num_base_bdevs_discovered": 1, 00:36:49.952 "num_base_bdevs_operational": 3, 00:36:49.952 "base_bdevs_list": [ 00:36:49.952 { 00:36:49.952 "name": "BaseBdev1", 00:36:49.952 "uuid": "f523a030-603b-4870-8c30-031fcefcfa15", 00:36:49.952 "is_configured": true, 00:36:49.952 "data_offset": 2048, 00:36:49.952 "data_size": 
63488 00:36:49.952 }, 00:36:49.952 { 00:36:49.952 "name": "BaseBdev2", 00:36:49.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:49.952 "is_configured": false, 00:36:49.952 "data_offset": 0, 00:36:49.952 "data_size": 0 00:36:49.952 }, 00:36:49.952 { 00:36:49.952 "name": "BaseBdev3", 00:36:49.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:49.952 "is_configured": false, 00:36:49.952 "data_offset": 0, 00:36:49.952 "data_size": 0 00:36:49.952 } 00:36:49.952 ] 00:36:49.952 }' 00:36:49.952 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:49.952 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:50.210 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:50.210 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.210 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:50.210 [2024-12-09 05:27:37.145573] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:50.210 [2024-12-09 05:27:37.145641] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:36:50.210 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.210 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:36:50.210 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.210 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:50.210 [2024-12-09 05:27:37.157672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:50.210 [2024-12-09 
05:27:37.160349] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:50.210 [2024-12-09 05:27:37.160593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:50.210 [2024-12-09 05:27:37.160624] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:50.210 [2024-12-09 05:27:37.160643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:50.210 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.210 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:36:50.210 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:50.210 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:36:50.210 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:50.210 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:50.210 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:50.210 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:50.210 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:50.210 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:50.210 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:50.210 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:50.210 05:27:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:36:50.210 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:50.210 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.210 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:50.210 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:50.467 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.467 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:50.467 "name": "Existed_Raid", 00:36:50.467 "uuid": "40677915-3272-4e05-8cb6-f64d6b239910", 00:36:50.467 "strip_size_kb": 64, 00:36:50.467 "state": "configuring", 00:36:50.467 "raid_level": "concat", 00:36:50.467 "superblock": true, 00:36:50.467 "num_base_bdevs": 3, 00:36:50.467 "num_base_bdevs_discovered": 1, 00:36:50.467 "num_base_bdevs_operational": 3, 00:36:50.467 "base_bdevs_list": [ 00:36:50.467 { 00:36:50.467 "name": "BaseBdev1", 00:36:50.467 "uuid": "f523a030-603b-4870-8c30-031fcefcfa15", 00:36:50.467 "is_configured": true, 00:36:50.467 "data_offset": 2048, 00:36:50.467 "data_size": 63488 00:36:50.467 }, 00:36:50.467 { 00:36:50.467 "name": "BaseBdev2", 00:36:50.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:50.467 "is_configured": false, 00:36:50.467 "data_offset": 0, 00:36:50.467 "data_size": 0 00:36:50.467 }, 00:36:50.467 { 00:36:50.467 "name": "BaseBdev3", 00:36:50.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:50.467 "is_configured": false, 00:36:50.467 "data_offset": 0, 00:36:50.467 "data_size": 0 00:36:50.467 } 00:36:50.467 ] 00:36:50.467 }' 00:36:50.467 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:50.467 05:27:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:36:50.724 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:36:50.724 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.724 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:50.983 [2024-12-09 05:27:37.727082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:50.983 BaseBdev2 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:50.983 [ 00:36:50.983 { 00:36:50.983 "name": "BaseBdev2", 00:36:50.983 "aliases": [ 00:36:50.983 "8577530a-7814-46a8-a59f-41792fe2dfc3" 00:36:50.983 ], 00:36:50.983 "product_name": "Malloc disk", 00:36:50.983 "block_size": 512, 00:36:50.983 "num_blocks": 65536, 00:36:50.983 "uuid": "8577530a-7814-46a8-a59f-41792fe2dfc3", 00:36:50.983 "assigned_rate_limits": { 00:36:50.983 "rw_ios_per_sec": 0, 00:36:50.983 "rw_mbytes_per_sec": 0, 00:36:50.983 "r_mbytes_per_sec": 0, 00:36:50.983 "w_mbytes_per_sec": 0 00:36:50.983 }, 00:36:50.983 "claimed": true, 00:36:50.983 "claim_type": "exclusive_write", 00:36:50.983 "zoned": false, 00:36:50.983 "supported_io_types": { 00:36:50.983 "read": true, 00:36:50.983 "write": true, 00:36:50.983 "unmap": true, 00:36:50.983 "flush": true, 00:36:50.983 "reset": true, 00:36:50.983 "nvme_admin": false, 00:36:50.983 "nvme_io": false, 00:36:50.983 "nvme_io_md": false, 00:36:50.983 "write_zeroes": true, 00:36:50.983 "zcopy": true, 00:36:50.983 "get_zone_info": false, 00:36:50.983 "zone_management": false, 00:36:50.983 "zone_append": false, 00:36:50.983 "compare": false, 00:36:50.983 "compare_and_write": false, 00:36:50.983 "abort": true, 00:36:50.983 "seek_hole": false, 00:36:50.983 "seek_data": false, 00:36:50.983 "copy": true, 00:36:50.983 "nvme_iov_md": false 00:36:50.983 }, 00:36:50.983 "memory_domains": [ 00:36:50.983 { 00:36:50.983 "dma_device_id": "system", 00:36:50.983 "dma_device_type": 1 00:36:50.983 }, 00:36:50.983 { 00:36:50.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:50.983 "dma_device_type": 2 00:36:50.983 } 00:36:50.983 ], 00:36:50.983 "driver_specific": {} 00:36:50.983 } 00:36:50.983 ] 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:50.983 "name": "Existed_Raid", 00:36:50.983 "uuid": "40677915-3272-4e05-8cb6-f64d6b239910", 00:36:50.983 "strip_size_kb": 64, 00:36:50.983 "state": "configuring", 00:36:50.983 "raid_level": "concat", 00:36:50.983 "superblock": true, 00:36:50.983 "num_base_bdevs": 3, 00:36:50.983 "num_base_bdevs_discovered": 2, 00:36:50.983 "num_base_bdevs_operational": 3, 00:36:50.983 "base_bdevs_list": [ 00:36:50.983 { 00:36:50.983 "name": "BaseBdev1", 00:36:50.983 "uuid": "f523a030-603b-4870-8c30-031fcefcfa15", 00:36:50.983 "is_configured": true, 00:36:50.983 "data_offset": 2048, 00:36:50.983 "data_size": 63488 00:36:50.983 }, 00:36:50.983 { 00:36:50.983 "name": "BaseBdev2", 00:36:50.983 "uuid": "8577530a-7814-46a8-a59f-41792fe2dfc3", 00:36:50.983 "is_configured": true, 00:36:50.983 "data_offset": 2048, 00:36:50.983 "data_size": 63488 00:36:50.983 }, 00:36:50.983 { 00:36:50.983 "name": "BaseBdev3", 00:36:50.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:50.983 "is_configured": false, 00:36:50.983 "data_offset": 0, 00:36:50.983 "data_size": 0 00:36:50.983 } 00:36:50.983 ] 00:36:50.983 }' 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:50.983 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:51.549 [2024-12-09 05:27:38.327170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:51.549 [2024-12-09 05:27:38.327508] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:36:51.549 [2024-12-09 05:27:38.327537] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:36:51.549 BaseBdev3 00:36:51.549 [2024-12-09 05:27:38.327893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:36:51.549 [2024-12-09 05:27:38.328157] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:36:51.549 [2024-12-09 05:27:38.328183] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:36:51.549 [2024-12-09 05:27:38.328407] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:51.549 [ 00:36:51.549 { 00:36:51.549 "name": "BaseBdev3", 00:36:51.549 "aliases": [ 00:36:51.549 "d1f71fc9-a5b1-4718-998d-1865caff9362" 00:36:51.549 ], 00:36:51.549 "product_name": "Malloc disk", 00:36:51.549 "block_size": 512, 00:36:51.549 "num_blocks": 65536, 00:36:51.549 "uuid": "d1f71fc9-a5b1-4718-998d-1865caff9362", 00:36:51.549 "assigned_rate_limits": { 00:36:51.549 "rw_ios_per_sec": 0, 00:36:51.549 "rw_mbytes_per_sec": 0, 00:36:51.549 "r_mbytes_per_sec": 0, 00:36:51.549 "w_mbytes_per_sec": 0 00:36:51.549 }, 00:36:51.549 "claimed": true, 00:36:51.549 "claim_type": "exclusive_write", 00:36:51.549 "zoned": false, 00:36:51.549 "supported_io_types": { 00:36:51.549 "read": true, 00:36:51.549 "write": true, 00:36:51.549 "unmap": true, 00:36:51.549 "flush": true, 00:36:51.549 "reset": true, 00:36:51.549 "nvme_admin": false, 00:36:51.549 "nvme_io": false, 00:36:51.549 "nvme_io_md": false, 00:36:51.549 "write_zeroes": true, 00:36:51.549 "zcopy": true, 00:36:51.549 "get_zone_info": false, 00:36:51.549 "zone_management": false, 00:36:51.549 "zone_append": false, 00:36:51.549 "compare": false, 00:36:51.549 "compare_and_write": false, 00:36:51.549 "abort": true, 00:36:51.549 "seek_hole": false, 00:36:51.549 "seek_data": false, 00:36:51.549 "copy": true, 00:36:51.549 "nvme_iov_md": false 00:36:51.549 }, 00:36:51.549 "memory_domains": [ 00:36:51.549 { 00:36:51.549 "dma_device_id": "system", 00:36:51.549 "dma_device_type": 1 00:36:51.549 }, 00:36:51.549 { 00:36:51.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:51.549 "dma_device_type": 2 00:36:51.549 } 00:36:51.549 ], 00:36:51.549 "driver_specific": 
{} 00:36:51.549 } 00:36:51.549 ] 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.549 05:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:51.550 "name": "Existed_Raid", 00:36:51.550 "uuid": "40677915-3272-4e05-8cb6-f64d6b239910", 00:36:51.550 "strip_size_kb": 64, 00:36:51.550 "state": "online", 00:36:51.550 "raid_level": "concat", 00:36:51.550 "superblock": true, 00:36:51.550 "num_base_bdevs": 3, 00:36:51.550 "num_base_bdevs_discovered": 3, 00:36:51.550 "num_base_bdevs_operational": 3, 00:36:51.550 "base_bdevs_list": [ 00:36:51.550 { 00:36:51.550 "name": "BaseBdev1", 00:36:51.550 "uuid": "f523a030-603b-4870-8c30-031fcefcfa15", 00:36:51.550 "is_configured": true, 00:36:51.550 "data_offset": 2048, 00:36:51.550 "data_size": 63488 00:36:51.550 }, 00:36:51.550 { 00:36:51.550 "name": "BaseBdev2", 00:36:51.550 "uuid": "8577530a-7814-46a8-a59f-41792fe2dfc3", 00:36:51.550 "is_configured": true, 00:36:51.550 "data_offset": 2048, 00:36:51.550 "data_size": 63488 00:36:51.550 }, 00:36:51.550 { 00:36:51.550 "name": "BaseBdev3", 00:36:51.550 "uuid": "d1f71fc9-a5b1-4718-998d-1865caff9362", 00:36:51.550 "is_configured": true, 00:36:51.550 "data_offset": 2048, 00:36:51.550 "data_size": 63488 00:36:51.550 } 00:36:51.550 ] 00:36:51.550 }' 00:36:51.550 05:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:51.550 05:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:52.143 05:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:36:52.144 05:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:36:52.144 05:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:36:52.144 05:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:52.144 05:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:36:52.144 05:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:52.144 05:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:36:52.144 05:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:52.144 05:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.144 05:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:52.144 [2024-12-09 05:27:38.899738] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:52.144 05:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.144 05:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:52.144 "name": "Existed_Raid", 00:36:52.144 "aliases": [ 00:36:52.144 "40677915-3272-4e05-8cb6-f64d6b239910" 00:36:52.144 ], 00:36:52.144 "product_name": "Raid Volume", 00:36:52.144 "block_size": 512, 00:36:52.144 "num_blocks": 190464, 00:36:52.144 "uuid": "40677915-3272-4e05-8cb6-f64d6b239910", 00:36:52.144 "assigned_rate_limits": { 00:36:52.144 "rw_ios_per_sec": 0, 00:36:52.144 "rw_mbytes_per_sec": 0, 00:36:52.144 "r_mbytes_per_sec": 0, 00:36:52.144 "w_mbytes_per_sec": 0 00:36:52.144 }, 00:36:52.144 "claimed": false, 00:36:52.144 "zoned": false, 00:36:52.144 "supported_io_types": { 00:36:52.144 "read": true, 00:36:52.144 "write": true, 00:36:52.144 "unmap": true, 00:36:52.144 "flush": true, 00:36:52.144 "reset": true, 00:36:52.144 "nvme_admin": false, 00:36:52.144 "nvme_io": false, 00:36:52.144 "nvme_io_md": false, 00:36:52.144 
"write_zeroes": true, 00:36:52.144 "zcopy": false, 00:36:52.144 "get_zone_info": false, 00:36:52.144 "zone_management": false, 00:36:52.144 "zone_append": false, 00:36:52.144 "compare": false, 00:36:52.144 "compare_and_write": false, 00:36:52.144 "abort": false, 00:36:52.144 "seek_hole": false, 00:36:52.144 "seek_data": false, 00:36:52.144 "copy": false, 00:36:52.144 "nvme_iov_md": false 00:36:52.144 }, 00:36:52.144 "memory_domains": [ 00:36:52.144 { 00:36:52.144 "dma_device_id": "system", 00:36:52.144 "dma_device_type": 1 00:36:52.144 }, 00:36:52.144 { 00:36:52.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:52.144 "dma_device_type": 2 00:36:52.144 }, 00:36:52.144 { 00:36:52.144 "dma_device_id": "system", 00:36:52.144 "dma_device_type": 1 00:36:52.144 }, 00:36:52.144 { 00:36:52.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:52.144 "dma_device_type": 2 00:36:52.144 }, 00:36:52.144 { 00:36:52.144 "dma_device_id": "system", 00:36:52.144 "dma_device_type": 1 00:36:52.144 }, 00:36:52.144 { 00:36:52.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:52.144 "dma_device_type": 2 00:36:52.144 } 00:36:52.144 ], 00:36:52.144 "driver_specific": { 00:36:52.144 "raid": { 00:36:52.144 "uuid": "40677915-3272-4e05-8cb6-f64d6b239910", 00:36:52.144 "strip_size_kb": 64, 00:36:52.144 "state": "online", 00:36:52.144 "raid_level": "concat", 00:36:52.144 "superblock": true, 00:36:52.144 "num_base_bdevs": 3, 00:36:52.144 "num_base_bdevs_discovered": 3, 00:36:52.144 "num_base_bdevs_operational": 3, 00:36:52.144 "base_bdevs_list": [ 00:36:52.144 { 00:36:52.144 "name": "BaseBdev1", 00:36:52.144 "uuid": "f523a030-603b-4870-8c30-031fcefcfa15", 00:36:52.144 "is_configured": true, 00:36:52.144 "data_offset": 2048, 00:36:52.144 "data_size": 63488 00:36:52.144 }, 00:36:52.144 { 00:36:52.144 "name": "BaseBdev2", 00:36:52.144 "uuid": "8577530a-7814-46a8-a59f-41792fe2dfc3", 00:36:52.144 "is_configured": true, 00:36:52.144 "data_offset": 2048, 00:36:52.144 "data_size": 63488 00:36:52.144 }, 
00:36:52.144 { 00:36:52.144 "name": "BaseBdev3", 00:36:52.144 "uuid": "d1f71fc9-a5b1-4718-998d-1865caff9362", 00:36:52.144 "is_configured": true, 00:36:52.144 "data_offset": 2048, 00:36:52.144 "data_size": 63488 00:36:52.144 } 00:36:52.144 ] 00:36:52.144 } 00:36:52.144 } 00:36:52.144 }' 00:36:52.144 05:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:52.144 05:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:36:52.144 BaseBdev2 00:36:52.144 BaseBdev3' 00:36:52.144 05:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:52.144 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:52.144 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:52.144 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:36:52.144 05:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.144 05:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:52.144 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:52.144 05:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.144 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:52.144 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:52.144 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:52.144 
05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:36:52.144 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:52.144 05:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.144 05:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:52.402 [2024-12-09 05:27:39.219487] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:52.402 [2024-12-09 05:27:39.219515] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:52.402 [2024-12-09 05:27:39.219580] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:52.402 05:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:52.403 05:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.403 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:52.403 "name": "Existed_Raid", 00:36:52.403 "uuid": "40677915-3272-4e05-8cb6-f64d6b239910", 00:36:52.403 "strip_size_kb": 64, 00:36:52.403 "state": "offline", 00:36:52.403 "raid_level": "concat", 00:36:52.403 "superblock": true, 00:36:52.403 "num_base_bdevs": 3, 00:36:52.403 "num_base_bdevs_discovered": 2, 00:36:52.403 "num_base_bdevs_operational": 2, 00:36:52.403 "base_bdevs_list": [ 00:36:52.403 { 00:36:52.403 "name": null, 00:36:52.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:52.403 "is_configured": false, 00:36:52.403 "data_offset": 0, 00:36:52.403 "data_size": 63488 00:36:52.403 }, 00:36:52.403 { 00:36:52.403 "name": "BaseBdev2", 00:36:52.403 "uuid": "8577530a-7814-46a8-a59f-41792fe2dfc3", 00:36:52.403 "is_configured": true, 00:36:52.403 "data_offset": 2048, 00:36:52.403 "data_size": 63488 00:36:52.403 }, 00:36:52.403 { 00:36:52.403 "name": "BaseBdev3", 00:36:52.403 "uuid": "d1f71fc9-a5b1-4718-998d-1865caff9362", 
00:36:52.403 "is_configured": true, 00:36:52.403 "data_offset": 2048, 00:36:52.403 "data_size": 63488 00:36:52.403 } 00:36:52.403 ] 00:36:52.403 }' 00:36:52.403 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:52.403 05:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:52.984 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:36:52.984 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:52.984 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:52.984 05:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.984 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:36:52.984 05:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:52.984 05:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.984 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:36:52.984 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:52.984 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:36:52.984 05:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.984 05:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:52.984 [2024-12-09 05:27:39.893897] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:53.242 05:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.242 05:27:39 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:36:53.242 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:53.242 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:53.242 05:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.242 05:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:53.242 05:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:36:53.242 05:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.242 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:36:53.242 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:53.242 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:36:53.242 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.242 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:53.242 [2024-12-09 05:27:40.035661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:36:53.242 [2024-12-09 05:27:40.035758] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:36:53.242 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.242 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:36:53.242 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:53.242 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:36:53.242 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:36:53.242 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.242 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:53.242 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.242 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:36:53.242 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:36:53.242 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:36:53.242 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:36:53.242 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:53.242 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:36:53.242 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.242 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:53.500 BaseBdev2 00:36:53.500 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.500 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:36:53.500 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:36:53.500 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:53.500 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:53.500 05:27:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:53.500 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:53.500 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:53.500 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.500 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:53.500 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.500 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:53.500 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.500 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:53.500 [ 00:36:53.500 { 00:36:53.500 "name": "BaseBdev2", 00:36:53.500 "aliases": [ 00:36:53.500 "a0d83e54-c7e0-476b-9549-710995555f18" 00:36:53.500 ], 00:36:53.500 "product_name": "Malloc disk", 00:36:53.500 "block_size": 512, 00:36:53.500 "num_blocks": 65536, 00:36:53.500 "uuid": "a0d83e54-c7e0-476b-9549-710995555f18", 00:36:53.500 "assigned_rate_limits": { 00:36:53.500 "rw_ios_per_sec": 0, 00:36:53.500 "rw_mbytes_per_sec": 0, 00:36:53.500 "r_mbytes_per_sec": 0, 00:36:53.500 "w_mbytes_per_sec": 0 00:36:53.500 }, 00:36:53.500 "claimed": false, 00:36:53.500 "zoned": false, 00:36:53.500 "supported_io_types": { 00:36:53.500 "read": true, 00:36:53.500 "write": true, 00:36:53.500 "unmap": true, 00:36:53.500 "flush": true, 00:36:53.500 "reset": true, 00:36:53.500 "nvme_admin": false, 00:36:53.500 "nvme_io": false, 00:36:53.500 "nvme_io_md": false, 00:36:53.500 "write_zeroes": true, 00:36:53.500 "zcopy": true, 00:36:53.500 "get_zone_info": false, 00:36:53.500 
"zone_management": false, 00:36:53.500 "zone_append": false, 00:36:53.500 "compare": false, 00:36:53.500 "compare_and_write": false, 00:36:53.500 "abort": true, 00:36:53.500 "seek_hole": false, 00:36:53.500 "seek_data": false, 00:36:53.500 "copy": true, 00:36:53.500 "nvme_iov_md": false 00:36:53.500 }, 00:36:53.500 "memory_domains": [ 00:36:53.500 { 00:36:53.500 "dma_device_id": "system", 00:36:53.500 "dma_device_type": 1 00:36:53.500 }, 00:36:53.500 { 00:36:53.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:53.500 "dma_device_type": 2 00:36:53.500 } 00:36:53.500 ], 00:36:53.500 "driver_specific": {} 00:36:53.500 } 00:36:53.500 ] 00:36:53.500 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.500 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:53.500 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:36:53.500 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:53.500 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:36:53.500 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.500 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:53.500 BaseBdev3 00:36:53.500 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.500 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:36:53.500 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:36:53.500 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:53.500 05:27:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:36:53.500 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:53.501 [ 00:36:53.501 { 00:36:53.501 "name": "BaseBdev3", 00:36:53.501 "aliases": [ 00:36:53.501 "1691a4a7-d50e-4146-b193-966c4721b035" 00:36:53.501 ], 00:36:53.501 "product_name": "Malloc disk", 00:36:53.501 "block_size": 512, 00:36:53.501 "num_blocks": 65536, 00:36:53.501 "uuid": "1691a4a7-d50e-4146-b193-966c4721b035", 00:36:53.501 "assigned_rate_limits": { 00:36:53.501 "rw_ios_per_sec": 0, 00:36:53.501 "rw_mbytes_per_sec": 0, 00:36:53.501 "r_mbytes_per_sec": 0, 00:36:53.501 "w_mbytes_per_sec": 0 00:36:53.501 }, 00:36:53.501 "claimed": false, 00:36:53.501 "zoned": false, 00:36:53.501 "supported_io_types": { 00:36:53.501 "read": true, 00:36:53.501 "write": true, 00:36:53.501 "unmap": true, 00:36:53.501 "flush": true, 00:36:53.501 "reset": true, 00:36:53.501 "nvme_admin": false, 00:36:53.501 "nvme_io": false, 00:36:53.501 "nvme_io_md": false, 00:36:53.501 "write_zeroes": true, 00:36:53.501 
"zcopy": true, 00:36:53.501 "get_zone_info": false, 00:36:53.501 "zone_management": false, 00:36:53.501 "zone_append": false, 00:36:53.501 "compare": false, 00:36:53.501 "compare_and_write": false, 00:36:53.501 "abort": true, 00:36:53.501 "seek_hole": false, 00:36:53.501 "seek_data": false, 00:36:53.501 "copy": true, 00:36:53.501 "nvme_iov_md": false 00:36:53.501 }, 00:36:53.501 "memory_domains": [ 00:36:53.501 { 00:36:53.501 "dma_device_id": "system", 00:36:53.501 "dma_device_type": 1 00:36:53.501 }, 00:36:53.501 { 00:36:53.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:53.501 "dma_device_type": 2 00:36:53.501 } 00:36:53.501 ], 00:36:53.501 "driver_specific": {} 00:36:53.501 } 00:36:53.501 ] 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:53.501 [2024-12-09 05:27:40.335371] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:53.501 [2024-12-09 05:27:40.335423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:53.501 [2024-12-09 05:27:40.335471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:53.501 [2024-12-09 05:27:40.338115] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.501 05:27:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:53.501 "name": "Existed_Raid", 00:36:53.501 "uuid": "e683730d-503c-4b5b-b3af-ba5ab63a57f6", 00:36:53.501 "strip_size_kb": 64, 00:36:53.501 "state": "configuring", 00:36:53.501 "raid_level": "concat", 00:36:53.501 "superblock": true, 00:36:53.501 "num_base_bdevs": 3, 00:36:53.501 "num_base_bdevs_discovered": 2, 00:36:53.501 "num_base_bdevs_operational": 3, 00:36:53.501 "base_bdevs_list": [ 00:36:53.501 { 00:36:53.501 "name": "BaseBdev1", 00:36:53.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:53.501 "is_configured": false, 00:36:53.501 "data_offset": 0, 00:36:53.501 "data_size": 0 00:36:53.501 }, 00:36:53.501 { 00:36:53.501 "name": "BaseBdev2", 00:36:53.501 "uuid": "a0d83e54-c7e0-476b-9549-710995555f18", 00:36:53.501 "is_configured": true, 00:36:53.501 "data_offset": 2048, 00:36:53.501 "data_size": 63488 00:36:53.501 }, 00:36:53.501 { 00:36:53.501 "name": "BaseBdev3", 00:36:53.501 "uuid": "1691a4a7-d50e-4146-b193-966c4721b035", 00:36:53.501 "is_configured": true, 00:36:53.501 "data_offset": 2048, 00:36:53.501 "data_size": 63488 00:36:53.501 } 00:36:53.501 ] 00:36:53.501 }' 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:53.501 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:54.066 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:36:54.066 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.066 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:54.066 [2024-12-09 05:27:40.879505] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:54.066 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.066 05:27:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:36:54.066 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:54.066 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:54.066 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:54.066 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:54.066 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:54.066 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:54.066 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:54.066 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:54.066 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:54.066 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:54.066 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.066 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:54.066 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:54.066 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.066 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:54.066 "name": "Existed_Raid", 00:36:54.066 "uuid": "e683730d-503c-4b5b-b3af-ba5ab63a57f6", 00:36:54.066 "strip_size_kb": 64, 
00:36:54.066 "state": "configuring", 00:36:54.066 "raid_level": "concat", 00:36:54.066 "superblock": true, 00:36:54.066 "num_base_bdevs": 3, 00:36:54.066 "num_base_bdevs_discovered": 1, 00:36:54.066 "num_base_bdevs_operational": 3, 00:36:54.066 "base_bdevs_list": [ 00:36:54.066 { 00:36:54.066 "name": "BaseBdev1", 00:36:54.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:54.066 "is_configured": false, 00:36:54.066 "data_offset": 0, 00:36:54.066 "data_size": 0 00:36:54.066 }, 00:36:54.066 { 00:36:54.066 "name": null, 00:36:54.066 "uuid": "a0d83e54-c7e0-476b-9549-710995555f18", 00:36:54.066 "is_configured": false, 00:36:54.066 "data_offset": 0, 00:36:54.066 "data_size": 63488 00:36:54.066 }, 00:36:54.066 { 00:36:54.066 "name": "BaseBdev3", 00:36:54.066 "uuid": "1691a4a7-d50e-4146-b193-966c4721b035", 00:36:54.066 "is_configured": true, 00:36:54.066 "data_offset": 2048, 00:36:54.066 "data_size": 63488 00:36:54.066 } 00:36:54.066 ] 00:36:54.066 }' 00:36:54.066 05:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:54.066 05:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:54.631 05:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:54.631 05:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:36:54.631 05:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.631 05:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:54.631 05:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.631 05:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:36:54.631 05:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:36:54.631 05:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:54.632 [2024-12-09 05:27:41.484311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:54.632 BaseBdev1 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:54.632 
[ 00:36:54.632 { 00:36:54.632 "name": "BaseBdev1", 00:36:54.632 "aliases": [ 00:36:54.632 "fcdef8dc-58d4-45ae-8457-1561e6e77494" 00:36:54.632 ], 00:36:54.632 "product_name": "Malloc disk", 00:36:54.632 "block_size": 512, 00:36:54.632 "num_blocks": 65536, 00:36:54.632 "uuid": "fcdef8dc-58d4-45ae-8457-1561e6e77494", 00:36:54.632 "assigned_rate_limits": { 00:36:54.632 "rw_ios_per_sec": 0, 00:36:54.632 "rw_mbytes_per_sec": 0, 00:36:54.632 "r_mbytes_per_sec": 0, 00:36:54.632 "w_mbytes_per_sec": 0 00:36:54.632 }, 00:36:54.632 "claimed": true, 00:36:54.632 "claim_type": "exclusive_write", 00:36:54.632 "zoned": false, 00:36:54.632 "supported_io_types": { 00:36:54.632 "read": true, 00:36:54.632 "write": true, 00:36:54.632 "unmap": true, 00:36:54.632 "flush": true, 00:36:54.632 "reset": true, 00:36:54.632 "nvme_admin": false, 00:36:54.632 "nvme_io": false, 00:36:54.632 "nvme_io_md": false, 00:36:54.632 "write_zeroes": true, 00:36:54.632 "zcopy": true, 00:36:54.632 "get_zone_info": false, 00:36:54.632 "zone_management": false, 00:36:54.632 "zone_append": false, 00:36:54.632 "compare": false, 00:36:54.632 "compare_and_write": false, 00:36:54.632 "abort": true, 00:36:54.632 "seek_hole": false, 00:36:54.632 "seek_data": false, 00:36:54.632 "copy": true, 00:36:54.632 "nvme_iov_md": false 00:36:54.632 }, 00:36:54.632 "memory_domains": [ 00:36:54.632 { 00:36:54.632 "dma_device_id": "system", 00:36:54.632 "dma_device_type": 1 00:36:54.632 }, 00:36:54.632 { 00:36:54.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:54.632 "dma_device_type": 2 00:36:54.632 } 00:36:54.632 ], 00:36:54.632 "driver_specific": {} 00:36:54.632 } 00:36:54.632 ] 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:54.632 "name": "Existed_Raid", 00:36:54.632 "uuid": "e683730d-503c-4b5b-b3af-ba5ab63a57f6", 00:36:54.632 "strip_size_kb": 64, 00:36:54.632 "state": "configuring", 00:36:54.632 "raid_level": "concat", 00:36:54.632 "superblock": true, 
00:36:54.632 "num_base_bdevs": 3, 00:36:54.632 "num_base_bdevs_discovered": 2, 00:36:54.632 "num_base_bdevs_operational": 3, 00:36:54.632 "base_bdevs_list": [ 00:36:54.632 { 00:36:54.632 "name": "BaseBdev1", 00:36:54.632 "uuid": "fcdef8dc-58d4-45ae-8457-1561e6e77494", 00:36:54.632 "is_configured": true, 00:36:54.632 "data_offset": 2048, 00:36:54.632 "data_size": 63488 00:36:54.632 }, 00:36:54.632 { 00:36:54.632 "name": null, 00:36:54.632 "uuid": "a0d83e54-c7e0-476b-9549-710995555f18", 00:36:54.632 "is_configured": false, 00:36:54.632 "data_offset": 0, 00:36:54.632 "data_size": 63488 00:36:54.632 }, 00:36:54.632 { 00:36:54.632 "name": "BaseBdev3", 00:36:54.632 "uuid": "1691a4a7-d50e-4146-b193-966c4721b035", 00:36:54.632 "is_configured": true, 00:36:54.632 "data_offset": 2048, 00:36:54.632 "data_size": 63488 00:36:54.632 } 00:36:54.632 ] 00:36:54.632 }' 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:54.632 05:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:55.199 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:55.199 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:36:55.199 05:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.199 05:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:55.199 05:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.199 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:36:55.199 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:36:55.199 05:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:36:55.199 05:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:55.199 [2024-12-09 05:27:42.092444] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:36:55.199 05:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.199 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:36:55.199 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:55.199 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:55.199 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:55.199 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:55.199 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:55.199 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:55.199 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:55.199 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:55.199 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:55.199 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:55.199 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:55.199 05:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.199 05:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:36:55.199 05:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.199 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:55.199 "name": "Existed_Raid", 00:36:55.199 "uuid": "e683730d-503c-4b5b-b3af-ba5ab63a57f6", 00:36:55.199 "strip_size_kb": 64, 00:36:55.199 "state": "configuring", 00:36:55.199 "raid_level": "concat", 00:36:55.199 "superblock": true, 00:36:55.199 "num_base_bdevs": 3, 00:36:55.199 "num_base_bdevs_discovered": 1, 00:36:55.199 "num_base_bdevs_operational": 3, 00:36:55.199 "base_bdevs_list": [ 00:36:55.199 { 00:36:55.199 "name": "BaseBdev1", 00:36:55.199 "uuid": "fcdef8dc-58d4-45ae-8457-1561e6e77494", 00:36:55.199 "is_configured": true, 00:36:55.199 "data_offset": 2048, 00:36:55.199 "data_size": 63488 00:36:55.199 }, 00:36:55.199 { 00:36:55.199 "name": null, 00:36:55.199 "uuid": "a0d83e54-c7e0-476b-9549-710995555f18", 00:36:55.199 "is_configured": false, 00:36:55.199 "data_offset": 0, 00:36:55.199 "data_size": 63488 00:36:55.199 }, 00:36:55.199 { 00:36:55.199 "name": null, 00:36:55.199 "uuid": "1691a4a7-d50e-4146-b193-966c4721b035", 00:36:55.199 "is_configured": false, 00:36:55.199 "data_offset": 0, 00:36:55.199 "data_size": 63488 00:36:55.199 } 00:36:55.199 ] 00:36:55.199 }' 00:36:55.199 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:55.199 05:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:55.765 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:55.765 05:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.765 05:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:55.765 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:36:55.765 05:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.765 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:36:55.765 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:36:55.765 05:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.765 05:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:55.765 [2024-12-09 05:27:42.684631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:55.765 05:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.765 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:36:55.765 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:55.765 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:55.765 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:55.765 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:55.765 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:55.766 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:55.766 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:55.766 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:55.766 05:27:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:55.766 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:55.766 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:55.766 05:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.766 05:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:55.766 05:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.025 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:56.025 "name": "Existed_Raid", 00:36:56.025 "uuid": "e683730d-503c-4b5b-b3af-ba5ab63a57f6", 00:36:56.025 "strip_size_kb": 64, 00:36:56.025 "state": "configuring", 00:36:56.025 "raid_level": "concat", 00:36:56.025 "superblock": true, 00:36:56.025 "num_base_bdevs": 3, 00:36:56.025 "num_base_bdevs_discovered": 2, 00:36:56.025 "num_base_bdevs_operational": 3, 00:36:56.025 "base_bdevs_list": [ 00:36:56.025 { 00:36:56.025 "name": "BaseBdev1", 00:36:56.025 "uuid": "fcdef8dc-58d4-45ae-8457-1561e6e77494", 00:36:56.025 "is_configured": true, 00:36:56.025 "data_offset": 2048, 00:36:56.025 "data_size": 63488 00:36:56.025 }, 00:36:56.025 { 00:36:56.025 "name": null, 00:36:56.025 "uuid": "a0d83e54-c7e0-476b-9549-710995555f18", 00:36:56.025 "is_configured": false, 00:36:56.025 "data_offset": 0, 00:36:56.025 "data_size": 63488 00:36:56.025 }, 00:36:56.025 { 00:36:56.025 "name": "BaseBdev3", 00:36:56.025 "uuid": "1691a4a7-d50e-4146-b193-966c4721b035", 00:36:56.025 "is_configured": true, 00:36:56.025 "data_offset": 2048, 00:36:56.025 "data_size": 63488 00:36:56.025 } 00:36:56.025 ] 00:36:56.025 }' 00:36:56.025 05:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:56.025 
05:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:56.283 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:56.283 05:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.283 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:36:56.283 05:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:56.283 05:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.541 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:36:56.541 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:36:56.541 05:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.541 05:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:56.541 [2024-12-09 05:27:43.264830] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:56.541 05:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.541 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:36:56.541 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:56.541 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:56.541 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:56.541 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:56.541 05:27:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:56.541 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:56.541 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:56.541 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:56.541 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:56.541 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:56.541 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:56.541 05:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.541 05:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:56.541 05:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.541 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:56.541 "name": "Existed_Raid", 00:36:56.541 "uuid": "e683730d-503c-4b5b-b3af-ba5ab63a57f6", 00:36:56.541 "strip_size_kb": 64, 00:36:56.541 "state": "configuring", 00:36:56.541 "raid_level": "concat", 00:36:56.541 "superblock": true, 00:36:56.541 "num_base_bdevs": 3, 00:36:56.541 "num_base_bdevs_discovered": 1, 00:36:56.541 "num_base_bdevs_operational": 3, 00:36:56.541 "base_bdevs_list": [ 00:36:56.541 { 00:36:56.541 "name": null, 00:36:56.541 "uuid": "fcdef8dc-58d4-45ae-8457-1561e6e77494", 00:36:56.541 "is_configured": false, 00:36:56.541 "data_offset": 0, 00:36:56.541 "data_size": 63488 00:36:56.541 }, 00:36:56.541 { 00:36:56.541 "name": null, 00:36:56.541 "uuid": "a0d83e54-c7e0-476b-9549-710995555f18", 00:36:56.541 "is_configured": false, 
00:36:56.541 "data_offset": 0, 00:36:56.541 "data_size": 63488 00:36:56.541 }, 00:36:56.541 { 00:36:56.541 "name": "BaseBdev3", 00:36:56.541 "uuid": "1691a4a7-d50e-4146-b193-966c4721b035", 00:36:56.541 "is_configured": true, 00:36:56.541 "data_offset": 2048, 00:36:56.541 "data_size": 63488 00:36:56.541 } 00:36:56.541 ] 00:36:56.541 }' 00:36:56.541 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:56.541 05:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:57.108 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:57.108 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:36:57.108 05:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.108 05:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:57.108 05:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.108 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:36:57.108 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:36:57.108 05:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.108 05:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:57.108 [2024-12-09 05:27:43.925156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:57.108 05:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.108 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:36:57.108 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:57.108 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:57.108 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:57.108 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:57.108 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:57.108 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:57.108 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:57.108 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:57.108 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:57.108 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:57.108 05:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.108 05:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:57.108 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:57.108 05:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.108 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:57.108 "name": "Existed_Raid", 00:36:57.108 "uuid": "e683730d-503c-4b5b-b3af-ba5ab63a57f6", 00:36:57.108 "strip_size_kb": 64, 00:36:57.108 "state": "configuring", 00:36:57.108 "raid_level": "concat", 00:36:57.108 "superblock": true, 00:36:57.108 
"num_base_bdevs": 3, 00:36:57.108 "num_base_bdevs_discovered": 2, 00:36:57.108 "num_base_bdevs_operational": 3, 00:36:57.108 "base_bdevs_list": [ 00:36:57.108 { 00:36:57.108 "name": null, 00:36:57.108 "uuid": "fcdef8dc-58d4-45ae-8457-1561e6e77494", 00:36:57.108 "is_configured": false, 00:36:57.108 "data_offset": 0, 00:36:57.108 "data_size": 63488 00:36:57.108 }, 00:36:57.108 { 00:36:57.108 "name": "BaseBdev2", 00:36:57.108 "uuid": "a0d83e54-c7e0-476b-9549-710995555f18", 00:36:57.108 "is_configured": true, 00:36:57.108 "data_offset": 2048, 00:36:57.108 "data_size": 63488 00:36:57.108 }, 00:36:57.108 { 00:36:57.108 "name": "BaseBdev3", 00:36:57.108 "uuid": "1691a4a7-d50e-4146-b193-966c4721b035", 00:36:57.108 "is_configured": true, 00:36:57.108 "data_offset": 2048, 00:36:57.108 "data_size": 63488 00:36:57.108 } 00:36:57.108 ] 00:36:57.108 }' 00:36:57.108 05:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:57.108 05:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:57.675 05:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:36:57.675 05:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:57.675 05:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.675 05:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:57.675 05:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.675 05:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:36:57.675 05:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:57.675 05:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:36:57.675 05:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:57.675 05:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:36:57.675 05:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.675 05:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fcdef8dc-58d4-45ae-8457-1561e6e77494 00:36:57.675 05:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.675 05:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:57.675 [2024-12-09 05:27:44.612113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:36:57.675 [2024-12-09 05:27:44.612393] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:36:57.675 [2024-12-09 05:27:44.612426] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:36:57.675 NewBaseBdev 00:36:57.675 [2024-12-09 05:27:44.612721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:36:57.675 [2024-12-09 05:27:44.612917] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:36:57.675 [2024-12-09 05:27:44.612940] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:36:57.675 [2024-12-09 05:27:44.613107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:57.675 05:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.675 05:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:36:57.675 05:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=NewBaseBdev 00:36:57.675 05:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:57.675 05:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:57.675 05:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:57.675 05:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:57.675 05:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:57.675 05:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.675 05:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:57.675 05:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.675 05:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:36:57.675 05:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.675 05:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:57.675 [ 00:36:57.675 { 00:36:57.675 "name": "NewBaseBdev", 00:36:57.675 "aliases": [ 00:36:57.675 "fcdef8dc-58d4-45ae-8457-1561e6e77494" 00:36:57.675 ], 00:36:57.675 "product_name": "Malloc disk", 00:36:57.675 "block_size": 512, 00:36:57.675 "num_blocks": 65536, 00:36:57.675 "uuid": "fcdef8dc-58d4-45ae-8457-1561e6e77494", 00:36:57.675 "assigned_rate_limits": { 00:36:57.675 "rw_ios_per_sec": 0, 00:36:57.675 "rw_mbytes_per_sec": 0, 00:36:57.675 "r_mbytes_per_sec": 0, 00:36:57.675 "w_mbytes_per_sec": 0 00:36:57.675 }, 00:36:57.675 "claimed": true, 00:36:57.675 "claim_type": "exclusive_write", 00:36:57.675 "zoned": false, 00:36:57.675 "supported_io_types": { 00:36:57.675 "read": true, 00:36:57.675 
"write": true, 00:36:57.675 "unmap": true, 00:36:57.675 "flush": true, 00:36:57.675 "reset": true, 00:36:57.675 "nvme_admin": false, 00:36:57.934 "nvme_io": false, 00:36:57.934 "nvme_io_md": false, 00:36:57.934 "write_zeroes": true, 00:36:57.934 "zcopy": true, 00:36:57.934 "get_zone_info": false, 00:36:57.934 "zone_management": false, 00:36:57.934 "zone_append": false, 00:36:57.934 "compare": false, 00:36:57.934 "compare_and_write": false, 00:36:57.934 "abort": true, 00:36:57.934 "seek_hole": false, 00:36:57.934 "seek_data": false, 00:36:57.934 "copy": true, 00:36:57.934 "nvme_iov_md": false 00:36:57.934 }, 00:36:57.934 "memory_domains": [ 00:36:57.934 { 00:36:57.934 "dma_device_id": "system", 00:36:57.934 "dma_device_type": 1 00:36:57.934 }, 00:36:57.934 { 00:36:57.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:57.934 "dma_device_type": 2 00:36:57.934 } 00:36:57.934 ], 00:36:57.934 "driver_specific": {} 00:36:57.934 } 00:36:57.934 ] 00:36:57.934 05:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.934 05:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:57.934 05:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:36:57.934 05:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:57.934 05:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:57.934 05:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:57.934 05:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:57.934 05:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:57.934 05:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:36:57.934 05:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:57.934 05:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:57.934 05:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:57.934 05:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:57.934 05:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:57.934 05:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.934 05:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:57.934 05:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.934 05:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:57.934 "name": "Existed_Raid", 00:36:57.934 "uuid": "e683730d-503c-4b5b-b3af-ba5ab63a57f6", 00:36:57.935 "strip_size_kb": 64, 00:36:57.935 "state": "online", 00:36:57.935 "raid_level": "concat", 00:36:57.935 "superblock": true, 00:36:57.935 "num_base_bdevs": 3, 00:36:57.935 "num_base_bdevs_discovered": 3, 00:36:57.935 "num_base_bdevs_operational": 3, 00:36:57.935 "base_bdevs_list": [ 00:36:57.935 { 00:36:57.935 "name": "NewBaseBdev", 00:36:57.935 "uuid": "fcdef8dc-58d4-45ae-8457-1561e6e77494", 00:36:57.935 "is_configured": true, 00:36:57.935 "data_offset": 2048, 00:36:57.935 "data_size": 63488 00:36:57.935 }, 00:36:57.935 { 00:36:57.935 "name": "BaseBdev2", 00:36:57.935 "uuid": "a0d83e54-c7e0-476b-9549-710995555f18", 00:36:57.935 "is_configured": true, 00:36:57.935 "data_offset": 2048, 00:36:57.935 "data_size": 63488 00:36:57.935 }, 00:36:57.935 { 00:36:57.935 "name": "BaseBdev3", 00:36:57.935 "uuid": 
"1691a4a7-d50e-4146-b193-966c4721b035", 00:36:57.935 "is_configured": true, 00:36:57.935 "data_offset": 2048, 00:36:57.935 "data_size": 63488 00:36:57.935 } 00:36:57.935 ] 00:36:57.935 }' 00:36:57.935 05:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:57.935 05:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:58.502 05:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:36:58.502 05:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:36:58.502 05:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:58.502 05:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:58.502 05:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:36:58.502 05:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:58.502 05:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:36:58.502 05:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.502 05:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:58.502 05:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:58.502 [2024-12-09 05:27:45.188695] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:58.502 05:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.502 05:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:58.502 "name": "Existed_Raid", 00:36:58.502 "aliases": [ 00:36:58.502 "e683730d-503c-4b5b-b3af-ba5ab63a57f6" 
00:36:58.502 ], 00:36:58.502 "product_name": "Raid Volume", 00:36:58.502 "block_size": 512, 00:36:58.502 "num_blocks": 190464, 00:36:58.502 "uuid": "e683730d-503c-4b5b-b3af-ba5ab63a57f6", 00:36:58.502 "assigned_rate_limits": { 00:36:58.502 "rw_ios_per_sec": 0, 00:36:58.502 "rw_mbytes_per_sec": 0, 00:36:58.502 "r_mbytes_per_sec": 0, 00:36:58.502 "w_mbytes_per_sec": 0 00:36:58.502 }, 00:36:58.502 "claimed": false, 00:36:58.502 "zoned": false, 00:36:58.502 "supported_io_types": { 00:36:58.502 "read": true, 00:36:58.502 "write": true, 00:36:58.502 "unmap": true, 00:36:58.502 "flush": true, 00:36:58.502 "reset": true, 00:36:58.502 "nvme_admin": false, 00:36:58.502 "nvme_io": false, 00:36:58.502 "nvme_io_md": false, 00:36:58.502 "write_zeroes": true, 00:36:58.502 "zcopy": false, 00:36:58.502 "get_zone_info": false, 00:36:58.502 "zone_management": false, 00:36:58.502 "zone_append": false, 00:36:58.502 "compare": false, 00:36:58.502 "compare_and_write": false, 00:36:58.502 "abort": false, 00:36:58.502 "seek_hole": false, 00:36:58.502 "seek_data": false, 00:36:58.502 "copy": false, 00:36:58.502 "nvme_iov_md": false 00:36:58.502 }, 00:36:58.502 "memory_domains": [ 00:36:58.502 { 00:36:58.502 "dma_device_id": "system", 00:36:58.502 "dma_device_type": 1 00:36:58.502 }, 00:36:58.502 { 00:36:58.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:58.502 "dma_device_type": 2 00:36:58.502 }, 00:36:58.502 { 00:36:58.502 "dma_device_id": "system", 00:36:58.502 "dma_device_type": 1 00:36:58.502 }, 00:36:58.502 { 00:36:58.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:58.502 "dma_device_type": 2 00:36:58.502 }, 00:36:58.502 { 00:36:58.502 "dma_device_id": "system", 00:36:58.502 "dma_device_type": 1 00:36:58.502 }, 00:36:58.502 { 00:36:58.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:58.502 "dma_device_type": 2 00:36:58.502 } 00:36:58.502 ], 00:36:58.502 "driver_specific": { 00:36:58.502 "raid": { 00:36:58.502 "uuid": "e683730d-503c-4b5b-b3af-ba5ab63a57f6", 00:36:58.502 
"strip_size_kb": 64, 00:36:58.502 "state": "online", 00:36:58.502 "raid_level": "concat", 00:36:58.502 "superblock": true, 00:36:58.502 "num_base_bdevs": 3, 00:36:58.502 "num_base_bdevs_discovered": 3, 00:36:58.502 "num_base_bdevs_operational": 3, 00:36:58.502 "base_bdevs_list": [ 00:36:58.502 { 00:36:58.502 "name": "NewBaseBdev", 00:36:58.502 "uuid": "fcdef8dc-58d4-45ae-8457-1561e6e77494", 00:36:58.502 "is_configured": true, 00:36:58.502 "data_offset": 2048, 00:36:58.502 "data_size": 63488 00:36:58.502 }, 00:36:58.502 { 00:36:58.502 "name": "BaseBdev2", 00:36:58.502 "uuid": "a0d83e54-c7e0-476b-9549-710995555f18", 00:36:58.502 "is_configured": true, 00:36:58.502 "data_offset": 2048, 00:36:58.502 "data_size": 63488 00:36:58.502 }, 00:36:58.502 { 00:36:58.502 "name": "BaseBdev3", 00:36:58.502 "uuid": "1691a4a7-d50e-4146-b193-966c4721b035", 00:36:58.502 "is_configured": true, 00:36:58.502 "data_offset": 2048, 00:36:58.502 "data_size": 63488 00:36:58.502 } 00:36:58.502 ] 00:36:58.502 } 00:36:58.502 } 00:36:58.502 }' 00:36:58.502 05:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:58.502 05:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:36:58.502 BaseBdev2 00:36:58.502 BaseBdev3' 00:36:58.502 05:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:58.502 05:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:58.502 05:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:58.503 05:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:58.503 05:27:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:36:58.503 05:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.503 05:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:58.503 05:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.503 05:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:58.503 05:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:58.503 05:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:58.503 05:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:58.503 05:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:36:58.503 05:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.503 05:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:58.503 05:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.503 05:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:58.503 05:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:58.503 05:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:58.503 05:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:36:58.503 05:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:36:58.503 05:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.503 05:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:58.761 05:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.761 05:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:58.761 05:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:58.761 05:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:58.761 05:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.761 05:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:58.761 [2024-12-09 05:27:45.516451] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:58.761 [2024-12-09 05:27:45.516476] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:58.761 [2024-12-09 05:27:45.516565] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:58.761 [2024-12-09 05:27:45.516629] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:58.761 [2024-12-09 05:27:45.516648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:36:58.761 05:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.761 05:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66313 00:36:58.761 05:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66313 ']' 00:36:58.761 05:27:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 66313 00:36:58.761 05:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:36:58.761 05:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:58.761 05:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66313 00:36:58.761 killing process with pid 66313 00:36:58.761 05:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:58.761 05:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:58.761 05:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66313' 00:36:58.761 05:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66313 00:36:58.761 [2024-12-09 05:27:45.555724] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:58.761 05:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66313 00:36:59.020 [2024-12-09 05:27:45.812748] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:00.416 05:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:37:00.416 00:37:00.416 real 0m12.009s 00:37:00.416 user 0m19.837s 00:37:00.416 sys 0m1.709s 00:37:00.416 05:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:00.416 ************************************ 00:37:00.416 END TEST raid_state_function_test_sb 00:37:00.416 ************************************ 00:37:00.416 05:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:00.416 05:27:47 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:37:00.416 05:27:47 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:00.416 05:27:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:00.416 05:27:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:00.416 ************************************ 00:37:00.416 START TEST raid_superblock_test 00:37:00.416 ************************************ 00:37:00.416 05:27:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:37:00.416 05:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:37:00.416 05:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:37:00.416 05:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:37:00.416 05:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:37:00.416 05:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:37:00.416 05:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:37:00.416 05:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:37:00.416 05:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:37:00.416 05:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:37:00.416 05:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:37:00.416 05:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:37:00.416 05:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:37:00.416 05:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:37:00.416 05:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:37:00.416 05:27:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:37:00.416 05:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:37:00.416 05:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:37:00.416 05:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66945 00:37:00.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:00.416 05:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66945 00:37:00.416 05:27:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66945 ']' 00:37:00.416 05:27:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:00.416 05:27:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:00.416 05:27:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:00.416 05:27:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:00.416 05:27:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.416 [2024-12-09 05:27:47.116242] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:37:00.416 [2024-12-09 05:27:47.116611] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66945 ] 00:37:00.416 [2024-12-09 05:27:47.293524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:00.675 [2024-12-09 05:27:47.434246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:00.675 [2024-12-09 05:27:47.645622] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:00.675 [2024-12-09 05:27:47.646019] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:01.242 05:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:01.242 05:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:37:01.242 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:37:01.242 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:37:01.242 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:37:01.242 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:37:01.242 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:37:01.242 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:01.242 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:37:01.242 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:01.242 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:37:01.242 
05:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.242 05:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:01.242 malloc1 00:37:01.242 05:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.242 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:01.242 05:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.242 05:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:01.242 [2024-12-09 05:27:48.180099] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:01.242 [2024-12-09 05:27:48.180385] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:01.242 [2024-12-09 05:27:48.180470] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:37:01.242 [2024-12-09 05:27:48.180612] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:01.242 [2024-12-09 05:27:48.183546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:01.242 [2024-12-09 05:27:48.183749] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:01.242 pt1 00:37:01.242 05:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.242 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:37:01.242 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:37:01.242 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:37:01.242 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:37:01.242 05:27:48 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:37:01.243 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:01.243 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:37:01.243 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:01.243 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:37:01.243 05:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.243 05:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:01.522 malloc2 00:37:01.522 05:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.522 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:01.522 05:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.522 05:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:01.523 [2024-12-09 05:27:48.237017] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:01.523 [2024-12-09 05:27:48.237092] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:01.523 [2024-12-09 05:27:48.237127] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:37:01.523 [2024-12-09 05:27:48.237141] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:01.523 [2024-12-09 05:27:48.239821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:01.523 [2024-12-09 05:27:48.239861] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:01.523 
pt2 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:01.523 malloc3 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:01.523 [2024-12-09 05:27:48.303875] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:37:01.523 [2024-12-09 05:27:48.303952] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:01.523 [2024-12-09 05:27:48.303984] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:37:01.523 [2024-12-09 05:27:48.303999] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:01.523 [2024-12-09 05:27:48.306979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:01.523 [2024-12-09 05:27:48.307020] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:37:01.523 pt3 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:01.523 [2024-12-09 05:27:48.315921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:01.523 [2024-12-09 05:27:48.318389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:01.523 [2024-12-09 05:27:48.318494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:37:01.523 [2024-12-09 05:27:48.318677] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:37:01.523 [2024-12-09 05:27:48.318699] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:37:01.523 [2024-12-09 05:27:48.318996] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
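The loop traced above (bdev_raid.sh@416-426) builds per-index names for each base bdev — `malloc$i`, a passthru wrapper `pt$i`, and a zero-padded UUID — then assembles a concat raid with a superblock (`-s`). A dry-run sketch of that bookkeeping, which only echoes the RPC calls instead of sending them to a live SPDK target (the `rpc` prefix stands in for whatever RPC client the environment uses, e.g. `rpc.py`):

```shell
# Dry-run sketch of the base-bdev setup loop seen in the log:
# create a malloc bdev per index, wrap each in a passthru bdev with a
# deterministic UUID, then create a concat raid with a superblock.
num_base_bdevs=3
base_bdevs_pt=()
for ((i = 1; i <= num_base_bdevs; i++)); do
    bdev_malloc=malloc$i
    bdev_pt=pt$i
    # Matches the UUIDs in the log: 00000000-0000-0000-0000-000000000001, ...
    bdev_pt_uuid=$(printf '00000000-0000-0000-0000-%012d' "$i")
    base_bdevs_pt+=("$bdev_pt")
    echo "rpc bdev_malloc_create 32 512 -b $bdev_malloc"
    echo "rpc bdev_passthru_create -b $bdev_malloc -p $bdev_pt -u $bdev_pt_uuid"
done
echo "rpc bdev_raid_create -z 64 -r concat -b '${base_bdevs_pt[*]}' -n raid_bdev1 -s"
```

The passthru layer exists so the test can later tear down `pt*` bdevs independently of the malloc bdevs underneath, which is what makes the superblock re-examine path at the end of the test possible.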
00:37:01.523 [2024-12-09 05:27:48.319227] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:37:01.523 [2024-12-09 05:27:48.319250] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:37:01.523 [2024-12-09 05:27:48.319417] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.523 05:27:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:01.523 "name": "raid_bdev1", 00:37:01.523 "uuid": "eab7a803-90c1-4bc2-a51c-f32b209e3fbc", 00:37:01.523 "strip_size_kb": 64, 00:37:01.523 "state": "online", 00:37:01.523 "raid_level": "concat", 00:37:01.523 "superblock": true, 00:37:01.523 "num_base_bdevs": 3, 00:37:01.523 "num_base_bdevs_discovered": 3, 00:37:01.523 "num_base_bdevs_operational": 3, 00:37:01.523 "base_bdevs_list": [ 00:37:01.523 { 00:37:01.523 "name": "pt1", 00:37:01.523 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:01.523 "is_configured": true, 00:37:01.523 "data_offset": 2048, 00:37:01.523 "data_size": 63488 00:37:01.523 }, 00:37:01.523 { 00:37:01.523 "name": "pt2", 00:37:01.523 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:01.523 "is_configured": true, 00:37:01.523 "data_offset": 2048, 00:37:01.523 "data_size": 63488 00:37:01.523 }, 00:37:01.523 { 00:37:01.523 "name": "pt3", 00:37:01.523 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:01.523 "is_configured": true, 00:37:01.523 "data_offset": 2048, 00:37:01.523 "data_size": 63488 00:37:01.523 } 00:37:01.523 ] 00:37:01.523 }' 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:01.523 05:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.090 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:37:02.090 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:37:02.090 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:37:02.090 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:37:02.090 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:37:02.090 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:37:02.090 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:02.090 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:37:02.090 05:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.090 05:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.090 [2024-12-09 05:27:48.832417] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:02.090 05:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.090 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:02.090 "name": "raid_bdev1", 00:37:02.090 "aliases": [ 00:37:02.090 "eab7a803-90c1-4bc2-a51c-f32b209e3fbc" 00:37:02.090 ], 00:37:02.090 "product_name": "Raid Volume", 00:37:02.090 "block_size": 512, 00:37:02.090 "num_blocks": 190464, 00:37:02.090 "uuid": "eab7a803-90c1-4bc2-a51c-f32b209e3fbc", 00:37:02.090 "assigned_rate_limits": { 00:37:02.090 "rw_ios_per_sec": 0, 00:37:02.090 "rw_mbytes_per_sec": 0, 00:37:02.090 "r_mbytes_per_sec": 0, 00:37:02.090 "w_mbytes_per_sec": 0 00:37:02.090 }, 00:37:02.090 "claimed": false, 00:37:02.090 "zoned": false, 00:37:02.090 "supported_io_types": { 00:37:02.090 "read": true, 00:37:02.090 "write": true, 00:37:02.090 "unmap": true, 00:37:02.090 "flush": true, 00:37:02.090 "reset": true, 00:37:02.090 "nvme_admin": false, 00:37:02.090 "nvme_io": false, 00:37:02.090 "nvme_io_md": false, 00:37:02.090 "write_zeroes": true, 00:37:02.090 "zcopy": false, 00:37:02.090 "get_zone_info": false, 00:37:02.090 "zone_management": false, 00:37:02.090 "zone_append": false, 00:37:02.090 "compare": 
false, 00:37:02.090 "compare_and_write": false, 00:37:02.090 "abort": false, 00:37:02.090 "seek_hole": false, 00:37:02.090 "seek_data": false, 00:37:02.090 "copy": false, 00:37:02.090 "nvme_iov_md": false 00:37:02.090 }, 00:37:02.090 "memory_domains": [ 00:37:02.090 { 00:37:02.090 "dma_device_id": "system", 00:37:02.090 "dma_device_type": 1 00:37:02.090 }, 00:37:02.090 { 00:37:02.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:02.090 "dma_device_type": 2 00:37:02.090 }, 00:37:02.090 { 00:37:02.090 "dma_device_id": "system", 00:37:02.090 "dma_device_type": 1 00:37:02.090 }, 00:37:02.090 { 00:37:02.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:02.090 "dma_device_type": 2 00:37:02.090 }, 00:37:02.090 { 00:37:02.090 "dma_device_id": "system", 00:37:02.090 "dma_device_type": 1 00:37:02.090 }, 00:37:02.090 { 00:37:02.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:02.090 "dma_device_type": 2 00:37:02.090 } 00:37:02.090 ], 00:37:02.090 "driver_specific": { 00:37:02.090 "raid": { 00:37:02.090 "uuid": "eab7a803-90c1-4bc2-a51c-f32b209e3fbc", 00:37:02.090 "strip_size_kb": 64, 00:37:02.090 "state": "online", 00:37:02.090 "raid_level": "concat", 00:37:02.090 "superblock": true, 00:37:02.090 "num_base_bdevs": 3, 00:37:02.090 "num_base_bdevs_discovered": 3, 00:37:02.090 "num_base_bdevs_operational": 3, 00:37:02.090 "base_bdevs_list": [ 00:37:02.090 { 00:37:02.090 "name": "pt1", 00:37:02.090 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:02.090 "is_configured": true, 00:37:02.090 "data_offset": 2048, 00:37:02.090 "data_size": 63488 00:37:02.090 }, 00:37:02.090 { 00:37:02.090 "name": "pt2", 00:37:02.090 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:02.090 "is_configured": true, 00:37:02.090 "data_offset": 2048, 00:37:02.090 "data_size": 63488 00:37:02.090 }, 00:37:02.090 { 00:37:02.090 "name": "pt3", 00:37:02.090 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:02.090 "is_configured": true, 00:37:02.090 "data_offset": 2048, 00:37:02.090 
"data_size": 63488 00:37:02.090 } 00:37:02.090 ] 00:37:02.090 } 00:37:02.090 } 00:37:02.090 }' 00:37:02.090 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:02.090 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:37:02.090 pt2 00:37:02.090 pt3' 00:37:02.090 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:02.090 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:37:02.090 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:02.090 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:37:02.090 05:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:02.090 05:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.090 05:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.090 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.090 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:02.090 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:02.090 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:02.090 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:37:02.090 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.090 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:02.090 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.090 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.349 [2024-12-09 05:27:49.156435] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=eab7a803-90c1-4bc2-a51c-f32b209e3fbc 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z eab7a803-90c1-4bc2-a51c-f32b209e3fbc ']' 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.349 [2024-12-09 05:27:49.220223] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:02.349 [2024-12-09 05:27:49.220251] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:02.349 [2024-12-09 05:27:49.220331] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:02.349 [2024-12-09 05:27:49.220408] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:02.349 [2024-12-09 05:27:49.220423] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.349 05:27:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.349 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:37:02.608 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.608 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.609 [2024-12-09 05:27:49.368328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:37:02.609 [2024-12-09 05:27:49.371541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:37:02.609 [2024-12-09 05:27:49.371625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:37:02.609 [2024-12-09 05:27:49.371708] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:37:02.609 [2024-12-09 05:27:49.371804] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:37:02.609 [2024-12-09 05:27:49.371840] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:37:02.609 [2024-12-09 05:27:49.371867] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:02.609 [2024-12-09 05:27:49.371880] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:37:02.609 request: 00:37:02.609 { 00:37:02.609 "name": "raid_bdev1", 00:37:02.609 "raid_level": "concat", 00:37:02.609 "base_bdevs": [ 00:37:02.609 "malloc1", 00:37:02.609 "malloc2", 00:37:02.609 "malloc3" 00:37:02.609 ], 00:37:02.609 "strip_size_kb": 64, 00:37:02.609 "superblock": false, 00:37:02.609 "method": "bdev_raid_create", 00:37:02.609 "req_id": 1 00:37:02.609 } 00:37:02.609 Got JSON-RPC error response 00:37:02.609 response: 00:37:02.609 { 00:37:02.609 "code": -17, 00:37:02.609 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:37:02.609 } 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
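The `NOT rpc_cmd bdev_raid_create ...` step above is a negative test: recreating the raid from the malloc bdevs must fail with -17 "File exists", because their superblocks already belong to `raid_bdev1`. The `es` handling traced in the log (`local es=0`, `(( es > 128 ))`, `(( !es == 0 ))`) inverts the exit status while still letting signal deaths fail the test. A simplified sketch of that wrapper (not SPDK's exact `NOT` from autotest_common.sh, which also handles an expected-status argument):

```shell
# Expect-failure wrapper in the spirit of autotest_common.sh's NOT:
# succeeds only when the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?
    # Exit codes above 128 mean the command died from a signal; a real
    # crash should still fail the test, so propagate that status.
    if ((es > 128)); then
        return "$es"
    fi
    # Success for the wrapper means the command returned nonzero.
    ((es != 0))
}
```

With this pattern, `NOT rpc_cmd bdev_raid_create ... -n raid_bdev1` passes exactly when the RPC is rejected, as it is here with the JSON-RPC -17 error.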
00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.609 [2024-12-09 05:27:49.436402] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:02.609 [2024-12-09 05:27:49.436621] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:02.609 [2024-12-09 05:27:49.436671] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:37:02.609 [2024-12-09 05:27:49.436687] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:02.609 [2024-12-09 05:27:49.439728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:02.609 [2024-12-09 05:27:49.439954] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:02.609 [2024-12-09 05:27:49.440066] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:37:02.609 [2024-12-09 05:27:49.440147] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:02.609 pt1 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:02.609 "name": "raid_bdev1", 
00:37:02.609 "uuid": "eab7a803-90c1-4bc2-a51c-f32b209e3fbc", 00:37:02.609 "strip_size_kb": 64, 00:37:02.609 "state": "configuring", 00:37:02.609 "raid_level": "concat", 00:37:02.609 "superblock": true, 00:37:02.609 "num_base_bdevs": 3, 00:37:02.609 "num_base_bdevs_discovered": 1, 00:37:02.609 "num_base_bdevs_operational": 3, 00:37:02.609 "base_bdevs_list": [ 00:37:02.609 { 00:37:02.609 "name": "pt1", 00:37:02.609 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:02.609 "is_configured": true, 00:37:02.609 "data_offset": 2048, 00:37:02.609 "data_size": 63488 00:37:02.609 }, 00:37:02.609 { 00:37:02.609 "name": null, 00:37:02.609 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:02.609 "is_configured": false, 00:37:02.609 "data_offset": 2048, 00:37:02.609 "data_size": 63488 00:37:02.609 }, 00:37:02.609 { 00:37:02.609 "name": null, 00:37:02.609 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:02.609 "is_configured": false, 00:37:02.609 "data_offset": 2048, 00:37:02.609 "data_size": 63488 00:37:02.609 } 00:37:02.609 ] 00:37:02.609 }' 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:02.609 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:03.176 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:37:03.176 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:03.176 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.176 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:03.176 [2024-12-09 05:27:49.972584] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:03.176 [2024-12-09 05:27:49.972880] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:03.176 [2024-12-09 05:27:49.972934] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:37:03.176 [2024-12-09 05:27:49.972951] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:03.176 [2024-12-09 05:27:49.973569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:03.176 [2024-12-09 05:27:49.973609] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:03.176 [2024-12-09 05:27:49.973813] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:37:03.176 [2024-12-09 05:27:49.973872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:03.176 pt2 00:37:03.176 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.176 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:37:03.176 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.176 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:03.176 [2024-12-09 05:27:49.980557] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:37:03.176 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.176 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:37:03.176 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:03.176 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:03.176 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:37:03.176 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:03.176 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:37:03.176 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:03.176 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:03.176 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:03.176 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:03.176 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:03.176 05:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:03.176 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.176 05:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:03.176 05:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.176 05:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:03.176 "name": "raid_bdev1", 00:37:03.176 "uuid": "eab7a803-90c1-4bc2-a51c-f32b209e3fbc", 00:37:03.176 "strip_size_kb": 64, 00:37:03.176 "state": "configuring", 00:37:03.176 "raid_level": "concat", 00:37:03.176 "superblock": true, 00:37:03.176 "num_base_bdevs": 3, 00:37:03.176 "num_base_bdevs_discovered": 1, 00:37:03.176 "num_base_bdevs_operational": 3, 00:37:03.176 "base_bdevs_list": [ 00:37:03.176 { 00:37:03.176 "name": "pt1", 00:37:03.176 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:03.176 "is_configured": true, 00:37:03.176 "data_offset": 2048, 00:37:03.176 "data_size": 63488 00:37:03.176 }, 00:37:03.176 { 00:37:03.176 "name": null, 00:37:03.176 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:03.176 "is_configured": false, 00:37:03.176 "data_offset": 0, 00:37:03.176 "data_size": 63488 00:37:03.176 }, 00:37:03.176 { 00:37:03.176 "name": null, 00:37:03.176 
"uuid": "00000000-0000-0000-0000-000000000003", 00:37:03.176 "is_configured": false, 00:37:03.176 "data_offset": 2048, 00:37:03.176 "data_size": 63488 00:37:03.176 } 00:37:03.176 ] 00:37:03.176 }' 00:37:03.176 05:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:03.176 05:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:03.743 05:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:37:03.743 05:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:37:03.743 05:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:03.743 05:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.743 05:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:03.743 [2024-12-09 05:27:50.516757] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:03.743 [2024-12-09 05:27:50.516921] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:03.744 [2024-12-09 05:27:50.516968] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:37:03.744 [2024-12-09 05:27:50.516987] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:03.744 [2024-12-09 05:27:50.517725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:03.744 [2024-12-09 05:27:50.517778] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:03.744 [2024-12-09 05:27:50.517926] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:37:03.744 [2024-12-09 05:27:50.518048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:03.744 pt2 00:37:03.744 05:27:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.744 05:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:37:03.744 05:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:37:03.744 05:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:37:03.744 05:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.744 05:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:03.744 [2024-12-09 05:27:50.524684] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:37:03.744 [2024-12-09 05:27:50.524758] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:03.744 [2024-12-09 05:27:50.524807] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:37:03.744 [2024-12-09 05:27:50.524826] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:03.744 [2024-12-09 05:27:50.525266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:03.744 [2024-12-09 05:27:50.525303] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:37:03.744 [2024-12-09 05:27:50.525370] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:37:03.744 [2024-12-09 05:27:50.525400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:37:03.744 [2024-12-09 05:27:50.525587] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:37:03.744 [2024-12-09 05:27:50.525614] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:37:03.744 [2024-12-09 05:27:50.526034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:37:03.744 [2024-12-09 05:27:50.526241] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:37:03.744 [2024-12-09 05:27:50.526272] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:37:03.744 [2024-12-09 05:27:50.526482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:03.744 pt3 00:37:03.744 05:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.744 05:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:37:03.744 05:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:37:03.744 05:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:37:03.744 05:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:03.744 05:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:03.744 05:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:37:03.744 05:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:03.744 05:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:03.744 05:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:03.744 05:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:03.744 05:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:03.744 05:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:03.744 05:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:03.744 05:27:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:03.744 05:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.744 05:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:03.744 05:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.744 05:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:03.744 "name": "raid_bdev1", 00:37:03.744 "uuid": "eab7a803-90c1-4bc2-a51c-f32b209e3fbc", 00:37:03.744 "strip_size_kb": 64, 00:37:03.744 "state": "online", 00:37:03.744 "raid_level": "concat", 00:37:03.744 "superblock": true, 00:37:03.744 "num_base_bdevs": 3, 00:37:03.744 "num_base_bdevs_discovered": 3, 00:37:03.744 "num_base_bdevs_operational": 3, 00:37:03.744 "base_bdevs_list": [ 00:37:03.744 { 00:37:03.744 "name": "pt1", 00:37:03.744 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:03.744 "is_configured": true, 00:37:03.744 "data_offset": 2048, 00:37:03.744 "data_size": 63488 00:37:03.744 }, 00:37:03.744 { 00:37:03.744 "name": "pt2", 00:37:03.744 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:03.744 "is_configured": true, 00:37:03.744 "data_offset": 2048, 00:37:03.744 "data_size": 63488 00:37:03.744 }, 00:37:03.744 { 00:37:03.744 "name": "pt3", 00:37:03.744 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:03.744 "is_configured": true, 00:37:03.744 "data_offset": 2048, 00:37:03.744 "data_size": 63488 00:37:03.744 } 00:37:03.744 ] 00:37:03.744 }' 00:37:03.744 05:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:03.744 05:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:04.311 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:37:04.311 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:37:04.311 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:37:04.311 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:37:04.311 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:37:04.311 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:37:04.311 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:37:04.311 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:04.311 05:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.311 05:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:04.311 [2024-12-09 05:27:51.069314] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:04.311 05:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.311 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:04.311 "name": "raid_bdev1", 00:37:04.311 "aliases": [ 00:37:04.311 "eab7a803-90c1-4bc2-a51c-f32b209e3fbc" 00:37:04.311 ], 00:37:04.311 "product_name": "Raid Volume", 00:37:04.311 "block_size": 512, 00:37:04.311 "num_blocks": 190464, 00:37:04.311 "uuid": "eab7a803-90c1-4bc2-a51c-f32b209e3fbc", 00:37:04.311 "assigned_rate_limits": { 00:37:04.311 "rw_ios_per_sec": 0, 00:37:04.311 "rw_mbytes_per_sec": 0, 00:37:04.311 "r_mbytes_per_sec": 0, 00:37:04.311 "w_mbytes_per_sec": 0 00:37:04.311 }, 00:37:04.311 "claimed": false, 00:37:04.311 "zoned": false, 00:37:04.311 "supported_io_types": { 00:37:04.311 "read": true, 00:37:04.311 "write": true, 00:37:04.311 "unmap": true, 00:37:04.311 "flush": true, 00:37:04.311 "reset": true, 00:37:04.311 "nvme_admin": false, 00:37:04.311 "nvme_io": false, 
00:37:04.311 "nvme_io_md": false, 00:37:04.311 "write_zeroes": true, 00:37:04.311 "zcopy": false, 00:37:04.311 "get_zone_info": false, 00:37:04.311 "zone_management": false, 00:37:04.311 "zone_append": false, 00:37:04.311 "compare": false, 00:37:04.311 "compare_and_write": false, 00:37:04.311 "abort": false, 00:37:04.311 "seek_hole": false, 00:37:04.311 "seek_data": false, 00:37:04.311 "copy": false, 00:37:04.311 "nvme_iov_md": false 00:37:04.311 }, 00:37:04.311 "memory_domains": [ 00:37:04.311 { 00:37:04.311 "dma_device_id": "system", 00:37:04.311 "dma_device_type": 1 00:37:04.311 }, 00:37:04.311 { 00:37:04.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:04.311 "dma_device_type": 2 00:37:04.311 }, 00:37:04.311 { 00:37:04.311 "dma_device_id": "system", 00:37:04.311 "dma_device_type": 1 00:37:04.311 }, 00:37:04.311 { 00:37:04.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:04.311 "dma_device_type": 2 00:37:04.311 }, 00:37:04.311 { 00:37:04.311 "dma_device_id": "system", 00:37:04.311 "dma_device_type": 1 00:37:04.311 }, 00:37:04.311 { 00:37:04.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:04.311 "dma_device_type": 2 00:37:04.311 } 00:37:04.311 ], 00:37:04.311 "driver_specific": { 00:37:04.311 "raid": { 00:37:04.311 "uuid": "eab7a803-90c1-4bc2-a51c-f32b209e3fbc", 00:37:04.311 "strip_size_kb": 64, 00:37:04.312 "state": "online", 00:37:04.312 "raid_level": "concat", 00:37:04.312 "superblock": true, 00:37:04.312 "num_base_bdevs": 3, 00:37:04.312 "num_base_bdevs_discovered": 3, 00:37:04.312 "num_base_bdevs_operational": 3, 00:37:04.312 "base_bdevs_list": [ 00:37:04.312 { 00:37:04.312 "name": "pt1", 00:37:04.312 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:04.312 "is_configured": true, 00:37:04.312 "data_offset": 2048, 00:37:04.312 "data_size": 63488 00:37:04.312 }, 00:37:04.312 { 00:37:04.312 "name": "pt2", 00:37:04.312 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:04.312 "is_configured": true, 00:37:04.312 "data_offset": 2048, 00:37:04.312 
"data_size": 63488 00:37:04.312 }, 00:37:04.312 { 00:37:04.312 "name": "pt3", 00:37:04.312 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:04.312 "is_configured": true, 00:37:04.312 "data_offset": 2048, 00:37:04.312 "data_size": 63488 00:37:04.312 } 00:37:04.312 ] 00:37:04.312 } 00:37:04.312 } 00:37:04.312 }' 00:37:04.312 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:04.312 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:37:04.312 pt2 00:37:04.312 pt3' 00:37:04.312 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:04.312 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:37:04.312 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:04.312 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:37:04.312 05:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.312 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:04.312 05:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:04.312 05:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.312 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:04.312 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:04.312 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:04.312 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:37:04.312 05:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.312 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:04.312 05:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:04.571 05:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.571 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:04.571 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:04.571 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:04.571 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:37:04.571 05:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.571 05:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:04.571 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:04.571 05:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.571 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:04.571 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:04.571 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:04.571 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:37:04.571 05:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.571 05:27:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:37:04.571 [2024-12-09 05:27:51.393299] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:04.571 05:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.571 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' eab7a803-90c1-4bc2-a51c-f32b209e3fbc '!=' eab7a803-90c1-4bc2-a51c-f32b209e3fbc ']' 00:37:04.571 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:37:04.571 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:37:04.571 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:37:04.571 05:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66945 00:37:04.571 05:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66945 ']' 00:37:04.571 05:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66945 00:37:04.571 05:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:37:04.571 05:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:04.571 05:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66945 00:37:04.571 killing process with pid 66945 00:37:04.571 05:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:04.571 05:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:04.571 05:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66945' 00:37:04.571 05:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66945 00:37:04.571 [2024-12-09 05:27:51.471255] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:37:04.571 05:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66945 00:37:04.571 [2024-12-09 05:27:51.471354] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:04.571 [2024-12-09 05:27:51.471431] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:04.571 [2024-12-09 05:27:51.471450] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:37:04.829 [2024-12-09 05:27:51.727816] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:06.206 05:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:37:06.206 00:37:06.206 real 0m5.815s 00:37:06.206 user 0m8.687s 00:37:06.206 sys 0m0.928s 00:37:06.206 05:27:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:06.206 05:27:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:06.206 ************************************ 00:37:06.206 END TEST raid_superblock_test 00:37:06.206 ************************************ 00:37:06.206 05:27:52 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:37:06.206 05:27:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:37:06.206 05:27:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:06.206 05:27:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:06.206 ************************************ 00:37:06.206 START TEST raid_read_error_test 00:37:06.206 ************************************ 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:37:06.206 05:27:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nGFjQWhvJA 00:37:06.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67204 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67204 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67204 ']' 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:06.206 05:27:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:06.206 [2024-12-09 05:27:53.023452] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:37:06.206 [2024-12-09 05:27:53.023966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67204 ] 00:37:06.465 [2024-12-09 05:27:53.211046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:06.465 [2024-12-09 05:27:53.348531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:06.745 [2024-12-09 05:27:53.552540] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:06.745 [2024-12-09 05:27:53.552831] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:07.315 05:27:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:07.315 05:27:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:37:07.315 05:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:37:07.315 05:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:37:07.315 05:27:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.315 05:27:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:07.315 BaseBdev1_malloc 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:07.315 true 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:07.315 [2024-12-09 05:27:54.043386] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:37:07.315 [2024-12-09 05:27:54.043474] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:07.315 [2024-12-09 05:27:54.043503] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:37:07.315 [2024-12-09 05:27:54.043519] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:07.315 [2024-12-09 05:27:54.046274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:07.315 [2024-12-09 05:27:54.046384] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:07.315 BaseBdev1 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:07.315 BaseBdev2_malloc 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:07.315 true 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:07.315 [2024-12-09 05:27:54.098969] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:37:07.315 [2024-12-09 05:27:54.099051] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:07.315 [2024-12-09 05:27:54.099076] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:37:07.315 [2024-12-09 05:27:54.099092] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:07.315 [2024-12-09 05:27:54.101930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:07.315 [2024-12-09 05:27:54.102001] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:37:07.315 BaseBdev2 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:07.315 BaseBdev3_malloc 00:37:07.315 05:27:54 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:07.315 true 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:07.315 [2024-12-09 05:27:54.169265] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:37:07.315 [2024-12-09 05:27:54.169344] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:07.315 [2024-12-09 05:27:54.169370] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:37:07.315 [2024-12-09 05:27:54.169386] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:07.315 [2024-12-09 05:27:54.172237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:07.315 [2024-12-09 05:27:54.172305] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:37:07.315 BaseBdev3 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:07.315 [2024-12-09 05:27:54.181362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:07.315 [2024-12-09 05:27:54.183826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:07.315 [2024-12-09 05:27:54.183923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:07.315 [2024-12-09 05:27:54.184170] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:37:07.315 [2024-12-09 05:27:54.184188] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:37:07.315 [2024-12-09 05:27:54.184461] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:37:07.315 [2024-12-09 05:27:54.184660] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:37:07.315 [2024-12-09 05:27:54.184682] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:37:07.315 [2024-12-09 05:27:54.184860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:07.315 05:27:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.315 05:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:07.315 "name": "raid_bdev1", 00:37:07.315 "uuid": "234ce258-c19a-4053-912e-3aa2a29f2609", 00:37:07.315 "strip_size_kb": 64, 00:37:07.315 "state": "online", 00:37:07.315 "raid_level": "concat", 00:37:07.315 "superblock": true, 00:37:07.315 "num_base_bdevs": 3, 00:37:07.315 "num_base_bdevs_discovered": 3, 00:37:07.315 "num_base_bdevs_operational": 3, 00:37:07.315 "base_bdevs_list": [ 00:37:07.315 { 00:37:07.315 "name": "BaseBdev1", 00:37:07.315 "uuid": "d1ad695e-f350-58b1-99f0-c91e37f8716c", 00:37:07.315 "is_configured": true, 00:37:07.315 "data_offset": 2048, 00:37:07.315 "data_size": 63488 00:37:07.315 }, 00:37:07.315 { 00:37:07.315 "name": "BaseBdev2", 00:37:07.315 "uuid": "754e2056-d940-5ebf-8821-0b1f111b1472", 00:37:07.315 "is_configured": true, 00:37:07.316 "data_offset": 2048, 00:37:07.316 "data_size": 63488 
00:37:07.316 }, 00:37:07.316 { 00:37:07.316 "name": "BaseBdev3", 00:37:07.316 "uuid": "7b7a76d4-0424-53fe-bd12-c92c64d8df3f", 00:37:07.316 "is_configured": true, 00:37:07.316 "data_offset": 2048, 00:37:07.316 "data_size": 63488 00:37:07.316 } 00:37:07.316 ] 00:37:07.316 }' 00:37:07.316 05:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:07.316 05:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:07.891 05:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:37:07.891 05:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:37:07.891 [2024-12-09 05:27:54.834960] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:37:08.850 05:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:37:08.850 05:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.850 05:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:08.850 05:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.850 05:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:37:08.850 05:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:37:08.850 05:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:37:08.850 05:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:37:08.850 05:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:08.850 05:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:37:08.850 05:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:37:08.850 05:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:08.850 05:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:08.850 05:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:08.850 05:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:08.850 05:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:08.850 05:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:08.850 05:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:08.850 05:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:08.850 05:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.850 05:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:08.850 05:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.850 05:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:08.850 "name": "raid_bdev1", 00:37:08.850 "uuid": "234ce258-c19a-4053-912e-3aa2a29f2609", 00:37:08.850 "strip_size_kb": 64, 00:37:08.850 "state": "online", 00:37:08.850 "raid_level": "concat", 00:37:08.850 "superblock": true, 00:37:08.850 "num_base_bdevs": 3, 00:37:08.850 "num_base_bdevs_discovered": 3, 00:37:08.850 "num_base_bdevs_operational": 3, 00:37:08.850 "base_bdevs_list": [ 00:37:08.850 { 00:37:08.850 "name": "BaseBdev1", 00:37:08.850 "uuid": "d1ad695e-f350-58b1-99f0-c91e37f8716c", 00:37:08.850 "is_configured": true, 00:37:08.850 "data_offset": 2048, 00:37:08.850 "data_size": 63488 
00:37:08.850 }, 00:37:08.850 { 00:37:08.850 "name": "BaseBdev2", 00:37:08.850 "uuid": "754e2056-d940-5ebf-8821-0b1f111b1472", 00:37:08.850 "is_configured": true, 00:37:08.850 "data_offset": 2048, 00:37:08.850 "data_size": 63488 00:37:08.850 }, 00:37:08.850 { 00:37:08.850 "name": "BaseBdev3", 00:37:08.850 "uuid": "7b7a76d4-0424-53fe-bd12-c92c64d8df3f", 00:37:08.850 "is_configured": true, 00:37:08.850 "data_offset": 2048, 00:37:08.850 "data_size": 63488 00:37:08.850 } 00:37:08.850 ] 00:37:08.850 }' 00:37:08.850 05:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:08.850 05:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:09.417 05:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:09.417 05:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.417 05:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:09.417 [2024-12-09 05:27:56.250439] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:09.417 [2024-12-09 05:27:56.250638] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:09.417 [2024-12-09 05:27:56.254397] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:09.417 [2024-12-09 05:27:56.254524] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:09.417 [2024-12-09 05:27:56.254585] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:09.417 [2024-12-09 05:27:56.254601] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:37:09.417 { 00:37:09.417 "results": [ 00:37:09.417 { 00:37:09.417 "job": "raid_bdev1", 00:37:09.417 "core_mask": "0x1", 00:37:09.417 "workload": "randrw", 00:37:09.417 "percentage": 50, 
00:37:09.417 "status": "finished", 00:37:09.417 "queue_depth": 1, 00:37:09.417 "io_size": 131072, 00:37:09.417 "runtime": 1.413629, 00:37:09.417 "iops": 10521.855451465695, 00:37:09.417 "mibps": 1315.231931433212, 00:37:09.417 "io_failed": 1, 00:37:09.417 "io_timeout": 0, 00:37:09.417 "avg_latency_us": 132.9219901604278, 00:37:09.417 "min_latency_us": 36.77090909090909, 00:37:09.417 "max_latency_us": 1936.290909090909 00:37:09.417 } 00:37:09.417 ], 00:37:09.417 "core_count": 1 00:37:09.417 } 00:37:09.417 05:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.417 05:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67204 00:37:09.417 05:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67204 ']' 00:37:09.417 05:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67204 00:37:09.417 05:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:37:09.417 05:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:09.417 05:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67204 00:37:09.417 killing process with pid 67204 00:37:09.417 05:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:09.417 05:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:09.417 05:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67204' 00:37:09.417 05:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67204 00:37:09.417 [2024-12-09 05:27:56.294441] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:09.417 05:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67204 00:37:09.675 [2024-12-09 
05:27:56.495205] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:11.055 05:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nGFjQWhvJA 00:37:11.056 05:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:37:11.056 05:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:37:11.056 05:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:37:11.056 05:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:37:11.056 05:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:37:11.056 05:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:37:11.056 05:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:37:11.056 00:37:11.056 real 0m4.763s 00:37:11.056 user 0m5.852s 00:37:11.056 sys 0m0.639s 00:37:11.056 ************************************ 00:37:11.056 END TEST raid_read_error_test 00:37:11.056 ************************************ 00:37:11.056 05:27:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:11.056 05:27:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:11.056 05:27:57 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:37:11.056 05:27:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:37:11.056 05:27:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:11.056 05:27:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:11.056 ************************************ 00:37:11.056 START TEST raid_write_error_test 00:37:11.056 ************************************ 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:37:11.056 05:27:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:37:11.056 05:27:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Hm0RvB4Zpg 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67350 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67350 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67350 ']' 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:11.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:11.056 05:27:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:11.056 [2024-12-09 05:27:57.823854] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:37:11.056 [2024-12-09 05:27:57.824049] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67350 ] 00:37:11.056 [2024-12-09 05:27:57.996052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:11.322 [2024-12-09 05:27:58.136100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:11.579 [2024-12-09 05:27:58.343735] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:11.579 [2024-12-09 05:27:58.343785] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:12.146 05:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:12.146 05:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:37:12.146 05:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:37:12.146 05:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:37:12.146 05:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.146 05:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:12.146 BaseBdev1_malloc 00:37:12.146 05:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.146 05:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:37:12.146 05:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.146 05:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:12.146 true 00:37:12.146 05:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.146 05:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:37:12.146 05:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.146 05:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:12.146 [2024-12-09 05:27:58.905112] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:37:12.146 [2024-12-09 05:27:58.905214] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:12.146 [2024-12-09 05:27:58.905245] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:37:12.146 [2024-12-09 05:27:58.905262] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:12.146 [2024-12-09 05:27:58.908346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:12.146 [2024-12-09 05:27:58.908410] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:12.146 BaseBdev1 00:37:12.146 05:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.146 05:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:37:12.146 05:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:37:12.146 05:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.146 05:27:58 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:37:12.146 BaseBdev2_malloc 00:37:12.146 05:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.146 05:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:37:12.146 05:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.146 05:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:12.146 true 00:37:12.146 05:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.146 05:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:37:12.146 05:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.146 05:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:12.146 [2024-12-09 05:27:58.969828] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:37:12.146 [2024-12-09 05:27:58.969923] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:12.146 [2024-12-09 05:27:58.969952] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:37:12.146 [2024-12-09 05:27:58.969969] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:12.146 [2024-12-09 05:27:58.972914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:12.146 [2024-12-09 05:27:58.972967] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:37:12.146 BaseBdev2 00:37:12.146 05:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.146 05:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:37:12.146 05:27:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:37:12.146 05:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.146 05:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:12.146 BaseBdev3_malloc 00:37:12.146 05:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.146 05:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:37:12.146 05:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.146 05:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:12.146 true 00:37:12.146 05:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.146 05:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:37:12.146 05:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.146 05:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:12.146 [2024-12-09 05:27:59.041768] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:37:12.146 [2024-12-09 05:27:59.041864] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:12.146 [2024-12-09 05:27:59.041908] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:37:12.146 [2024-12-09 05:27:59.041926] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:12.146 [2024-12-09 05:27:59.044729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:12.146 [2024-12-09 05:27:59.044811] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:37:12.146 BaseBdev3 00:37:12.146 05:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.146 05:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:37:12.146 05:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.146 05:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:12.146 [2024-12-09 05:27:59.049903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:12.146 [2024-12-09 05:27:59.052474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:12.146 [2024-12-09 05:27:59.052580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:12.146 [2024-12-09 05:27:59.052905] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:37:12.146 [2024-12-09 05:27:59.052925] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:37:12.146 [2024-12-09 05:27:59.053217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:37:12.146 [2024-12-09 05:27:59.053501] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:37:12.146 [2024-12-09 05:27:59.053529] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:37:12.146 [2024-12-09 05:27:59.053696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:12.146 05:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.146 05:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:37:12.146 05:27:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:12.146 05:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:12.146 05:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:37:12.146 05:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:12.146 05:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:12.146 05:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:12.146 05:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:12.146 05:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:12.146 05:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:12.146 05:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:12.146 05:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:12.146 05:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.146 05:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:12.146 05:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.146 05:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:12.146 "name": "raid_bdev1", 00:37:12.146 "uuid": "76bb048e-eda3-4ec1-9227-2da72a2da2cf", 00:37:12.146 "strip_size_kb": 64, 00:37:12.146 "state": "online", 00:37:12.146 "raid_level": "concat", 00:37:12.146 "superblock": true, 00:37:12.146 "num_base_bdevs": 3, 00:37:12.146 "num_base_bdevs_discovered": 3, 00:37:12.146 "num_base_bdevs_operational": 3, 00:37:12.146 "base_bdevs_list": [ 00:37:12.146 { 00:37:12.146 
"name": "BaseBdev1", 00:37:12.146 "uuid": "55e0e7ea-34f3-5fa6-a81a-e1ee9fc6f4e7", 00:37:12.146 "is_configured": true, 00:37:12.146 "data_offset": 2048, 00:37:12.146 "data_size": 63488 00:37:12.146 }, 00:37:12.146 { 00:37:12.146 "name": "BaseBdev2", 00:37:12.146 "uuid": "09885c9b-1284-5d19-804a-dd9781bc05e9", 00:37:12.146 "is_configured": true, 00:37:12.146 "data_offset": 2048, 00:37:12.146 "data_size": 63488 00:37:12.146 }, 00:37:12.146 { 00:37:12.146 "name": "BaseBdev3", 00:37:12.146 "uuid": "e7666f0d-434a-5bb9-980b-e458ae8b8d90", 00:37:12.146 "is_configured": true, 00:37:12.146 "data_offset": 2048, 00:37:12.146 "data_size": 63488 00:37:12.146 } 00:37:12.146 ] 00:37:12.146 }' 00:37:12.146 05:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:12.146 05:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:12.713 05:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:37:12.713 05:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:37:12.971 [2024-12-09 05:27:59.703520] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:37:13.909 05:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:37:13.909 05:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:13.909 05:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:13.909 05:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:13.909 05:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:37:13.909 05:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:37:13.909 05:28:00 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:37:13.909 05:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:37:13.909 05:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:13.909 05:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:13.909 05:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:37:13.909 05:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:13.909 05:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:13.909 05:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:13.909 05:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:13.909 05:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:13.909 05:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:13.909 05:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:13.909 05:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:13.909 05:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:13.909 05:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:13.909 05:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:13.909 05:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:13.909 "name": "raid_bdev1", 00:37:13.909 "uuid": "76bb048e-eda3-4ec1-9227-2da72a2da2cf", 00:37:13.909 "strip_size_kb": 64, 00:37:13.909 "state": "online", 
00:37:13.909 "raid_level": "concat", 00:37:13.909 "superblock": true, 00:37:13.909 "num_base_bdevs": 3, 00:37:13.909 "num_base_bdevs_discovered": 3, 00:37:13.909 "num_base_bdevs_operational": 3, 00:37:13.909 "base_bdevs_list": [ 00:37:13.909 { 00:37:13.909 "name": "BaseBdev1", 00:37:13.909 "uuid": "55e0e7ea-34f3-5fa6-a81a-e1ee9fc6f4e7", 00:37:13.909 "is_configured": true, 00:37:13.909 "data_offset": 2048, 00:37:13.909 "data_size": 63488 00:37:13.909 }, 00:37:13.909 { 00:37:13.909 "name": "BaseBdev2", 00:37:13.909 "uuid": "09885c9b-1284-5d19-804a-dd9781bc05e9", 00:37:13.909 "is_configured": true, 00:37:13.909 "data_offset": 2048, 00:37:13.909 "data_size": 63488 00:37:13.909 }, 00:37:13.909 { 00:37:13.909 "name": "BaseBdev3", 00:37:13.909 "uuid": "e7666f0d-434a-5bb9-980b-e458ae8b8d90", 00:37:13.909 "is_configured": true, 00:37:13.909 "data_offset": 2048, 00:37:13.909 "data_size": 63488 00:37:13.909 } 00:37:13.909 ] 00:37:13.909 }' 00:37:13.909 05:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:13.909 05:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:14.168 05:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:14.168 05:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.168 05:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:14.168 [2024-12-09 05:28:01.133235] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:14.168 [2024-12-09 05:28:01.133280] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:14.168 [2024-12-09 05:28:01.136689] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:14.168 [2024-12-09 05:28:01.136743] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:14.168 [2024-12-09 05:28:01.136809] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:14.169 [2024-12-09 05:28:01.136826] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:37:14.169 { 00:37:14.169 "results": [ 00:37:14.169 { 00:37:14.169 "job": "raid_bdev1", 00:37:14.169 "core_mask": "0x1", 00:37:14.169 "workload": "randrw", 00:37:14.169 "percentage": 50, 00:37:14.169 "status": "finished", 00:37:14.169 "queue_depth": 1, 00:37:14.169 "io_size": 131072, 00:37:14.169 "runtime": 1.427358, 00:37:14.169 "iops": 10472.495337539705, 00:37:14.169 "mibps": 1309.0619171924632, 00:37:14.169 "io_failed": 1, 00:37:14.169 "io_timeout": 0, 00:37:14.169 "avg_latency_us": 133.86714343920846, 00:37:14.169 "min_latency_us": 36.07272727272727, 00:37:14.169 "max_latency_us": 2115.0254545454545 00:37:14.169 } 00:37:14.169 ], 00:37:14.169 "core_count": 1 00:37:14.169 } 00:37:14.169 05:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.169 05:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67350 00:37:14.169 05:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67350 ']' 00:37:14.169 05:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67350 00:37:14.426 05:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:37:14.426 05:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:14.426 05:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67350 00:37:14.426 killing process with pid 67350 00:37:14.426 05:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:14.426 05:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:14.426 
05:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67350' 00:37:14.426 05:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67350 00:37:14.426 [2024-12-09 05:28:01.174681] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:14.426 05:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67350 00:37:14.426 [2024-12-09 05:28:01.368150] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:15.799 05:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Hm0RvB4Zpg 00:37:15.799 05:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:37:15.799 05:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:37:15.799 05:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:37:15.799 05:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:37:15.799 05:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:37:15.800 05:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:37:15.800 05:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:37:15.800 00:37:15.800 real 0m4.851s 00:37:15.800 user 0m5.960s 00:37:15.800 sys 0m0.640s 00:37:15.800 05:28:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:15.800 ************************************ 00:37:15.800 END TEST raid_write_error_test 00:37:15.800 ************************************ 00:37:15.800 05:28:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:15.800 05:28:02 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:37:15.800 05:28:02 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:37:15.800 05:28:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:37:15.800 05:28:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:15.800 05:28:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:15.800 ************************************ 00:37:15.800 START TEST raid_state_function_test 00:37:15.800 ************************************ 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67502 00:37:15.800 Process raid pid: 67502 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67502' 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67502 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67502 ']' 00:37:15.800 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:15.800 05:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:15.800 [2024-12-09 05:28:02.748408] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:37:15.800 [2024-12-09 05:28:02.748857] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:16.057 [2024-12-09 05:28:02.938481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:16.315 [2024-12-09 05:28:03.071296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:16.315 [2024-12-09 05:28:03.270476] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:16.315 [2024-12-09 05:28:03.270538] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:16.903 05:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:16.903 05:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:37:16.903 05:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:37:16.903 05:28:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.903 05:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:16.903 [2024-12-09 05:28:03.706864] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:16.903 [2024-12-09 05:28:03.707001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:16.903 [2024-12-09 05:28:03.707019] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:16.903 [2024-12-09 05:28:03.707035] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:16.903 [2024-12-09 05:28:03.707045] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:37:16.903 [2024-12-09 05:28:03.707059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:37:16.903 05:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.903 05:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:37:16.903 05:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:16.903 05:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:16.903 05:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:16.903 05:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:16.903 05:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:16.903 05:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:16.903 05:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:16.903 
05:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:16.903 05:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:16.903 05:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:16.903 05:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:16.903 05:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.903 05:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:16.903 05:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.903 05:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:16.903 "name": "Existed_Raid", 00:37:16.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:16.903 "strip_size_kb": 0, 00:37:16.903 "state": "configuring", 00:37:16.903 "raid_level": "raid1", 00:37:16.903 "superblock": false, 00:37:16.903 "num_base_bdevs": 3, 00:37:16.903 "num_base_bdevs_discovered": 0, 00:37:16.903 "num_base_bdevs_operational": 3, 00:37:16.903 "base_bdevs_list": [ 00:37:16.903 { 00:37:16.903 "name": "BaseBdev1", 00:37:16.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:16.903 "is_configured": false, 00:37:16.903 "data_offset": 0, 00:37:16.903 "data_size": 0 00:37:16.903 }, 00:37:16.903 { 00:37:16.903 "name": "BaseBdev2", 00:37:16.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:16.903 "is_configured": false, 00:37:16.903 "data_offset": 0, 00:37:16.903 "data_size": 0 00:37:16.903 }, 00:37:16.903 { 00:37:16.903 "name": "BaseBdev3", 00:37:16.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:16.903 "is_configured": false, 00:37:16.903 "data_offset": 0, 00:37:16.903 "data_size": 0 00:37:16.903 } 00:37:16.903 ] 00:37:16.903 }' 00:37:16.903 05:28:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:16.904 05:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.471 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:37:17.471 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.471 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.471 [2024-12-09 05:28:04.250950] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:17.471 [2024-12-09 05:28:04.250990] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:37:17.471 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.471 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:37:17.471 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.471 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.471 [2024-12-09 05:28:04.258943] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:17.471 [2024-12-09 05:28:04.259010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:17.471 [2024-12-09 05:28:04.259025] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:17.471 [2024-12-09 05:28:04.259040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:17.471 [2024-12-09 05:28:04.259049] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:37:17.472 [2024-12-09 05:28:04.259062] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.472 [2024-12-09 05:28:04.303952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:17.472 BaseBdev1 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.472 [ 00:37:17.472 { 00:37:17.472 "name": "BaseBdev1", 00:37:17.472 "aliases": [ 00:37:17.472 "871e4e1a-52b9-475f-b594-1fd586052dd7" 00:37:17.472 ], 00:37:17.472 "product_name": "Malloc disk", 00:37:17.472 "block_size": 512, 00:37:17.472 "num_blocks": 65536, 00:37:17.472 "uuid": "871e4e1a-52b9-475f-b594-1fd586052dd7", 00:37:17.472 "assigned_rate_limits": { 00:37:17.472 "rw_ios_per_sec": 0, 00:37:17.472 "rw_mbytes_per_sec": 0, 00:37:17.472 "r_mbytes_per_sec": 0, 00:37:17.472 "w_mbytes_per_sec": 0 00:37:17.472 }, 00:37:17.472 "claimed": true, 00:37:17.472 "claim_type": "exclusive_write", 00:37:17.472 "zoned": false, 00:37:17.472 "supported_io_types": { 00:37:17.472 "read": true, 00:37:17.472 "write": true, 00:37:17.472 "unmap": true, 00:37:17.472 "flush": true, 00:37:17.472 "reset": true, 00:37:17.472 "nvme_admin": false, 00:37:17.472 "nvme_io": false, 00:37:17.472 "nvme_io_md": false, 00:37:17.472 "write_zeroes": true, 00:37:17.472 "zcopy": true, 00:37:17.472 "get_zone_info": false, 00:37:17.472 "zone_management": false, 00:37:17.472 "zone_append": false, 00:37:17.472 "compare": false, 00:37:17.472 "compare_and_write": false, 00:37:17.472 "abort": true, 00:37:17.472 "seek_hole": false, 00:37:17.472 "seek_data": false, 00:37:17.472 "copy": true, 00:37:17.472 "nvme_iov_md": false 00:37:17.472 }, 00:37:17.472 "memory_domains": [ 00:37:17.472 { 00:37:17.472 "dma_device_id": "system", 00:37:17.472 "dma_device_type": 1 00:37:17.472 }, 00:37:17.472 { 00:37:17.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:17.472 "dma_device_type": 2 00:37:17.472 } 00:37:17.472 ], 00:37:17.472 "driver_specific": {} 00:37:17.472 } 00:37:17.472 ] 00:37:17.472 05:28:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:37:17.472 "name": "Existed_Raid", 00:37:17.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:17.472 "strip_size_kb": 0, 00:37:17.472 "state": "configuring", 00:37:17.472 "raid_level": "raid1", 00:37:17.472 "superblock": false, 00:37:17.472 "num_base_bdevs": 3, 00:37:17.472 "num_base_bdevs_discovered": 1, 00:37:17.472 "num_base_bdevs_operational": 3, 00:37:17.472 "base_bdevs_list": [ 00:37:17.472 { 00:37:17.472 "name": "BaseBdev1", 00:37:17.472 "uuid": "871e4e1a-52b9-475f-b594-1fd586052dd7", 00:37:17.472 "is_configured": true, 00:37:17.472 "data_offset": 0, 00:37:17.472 "data_size": 65536 00:37:17.472 }, 00:37:17.472 { 00:37:17.472 "name": "BaseBdev2", 00:37:17.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:17.472 "is_configured": false, 00:37:17.472 "data_offset": 0, 00:37:17.472 "data_size": 0 00:37:17.472 }, 00:37:17.472 { 00:37:17.472 "name": "BaseBdev3", 00:37:17.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:17.472 "is_configured": false, 00:37:17.472 "data_offset": 0, 00:37:17.472 "data_size": 0 00:37:17.472 } 00:37:17.472 ] 00:37:17.472 }' 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:17.472 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.039 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:37:18.039 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.039 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.039 [2024-12-09 05:28:04.860168] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:18.039 [2024-12-09 05:28:04.860456] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:37:18.039 05:28:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.039 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:37:18.039 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.039 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.039 [2024-12-09 05:28:04.868197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:18.039 [2024-12-09 05:28:04.870679] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:18.039 [2024-12-09 05:28:04.870749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:18.039 [2024-12-09 05:28:04.870765] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:37:18.039 [2024-12-09 05:28:04.870812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:37:18.039 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.039 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:37:18.039 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:37:18.039 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:37:18.039 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:18.039 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:18.039 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:18.039 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:37:18.039 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:18.039 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:18.039 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:18.039 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:18.039 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:18.039 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:18.040 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:18.040 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.040 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.040 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.040 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:18.040 "name": "Existed_Raid", 00:37:18.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:18.040 "strip_size_kb": 0, 00:37:18.040 "state": "configuring", 00:37:18.040 "raid_level": "raid1", 00:37:18.040 "superblock": false, 00:37:18.040 "num_base_bdevs": 3, 00:37:18.040 "num_base_bdevs_discovered": 1, 00:37:18.040 "num_base_bdevs_operational": 3, 00:37:18.040 "base_bdevs_list": [ 00:37:18.040 { 00:37:18.040 "name": "BaseBdev1", 00:37:18.040 "uuid": "871e4e1a-52b9-475f-b594-1fd586052dd7", 00:37:18.040 "is_configured": true, 00:37:18.040 "data_offset": 0, 00:37:18.040 "data_size": 65536 00:37:18.040 }, 00:37:18.040 { 00:37:18.040 "name": "BaseBdev2", 00:37:18.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:18.040 
"is_configured": false, 00:37:18.040 "data_offset": 0, 00:37:18.040 "data_size": 0 00:37:18.040 }, 00:37:18.040 { 00:37:18.040 "name": "BaseBdev3", 00:37:18.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:18.040 "is_configured": false, 00:37:18.040 "data_offset": 0, 00:37:18.040 "data_size": 0 00:37:18.040 } 00:37:18.040 ] 00:37:18.040 }' 00:37:18.040 05:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:18.040 05:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.605 05:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:37:18.605 05:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.605 05:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.605 [2024-12-09 05:28:05.438894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:18.605 BaseBdev2 00:37:18.605 05:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.605 05:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:37:18.605 05:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:37:18.605 05:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:18.605 05:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:37:18.605 05:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:18.605 05:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:18.605 05:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:18.605 05:28:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.605 05:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.605 05:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.605 05:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:37:18.605 05:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.605 05:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.605 [ 00:37:18.605 { 00:37:18.605 "name": "BaseBdev2", 00:37:18.605 "aliases": [ 00:37:18.605 "85d459e1-e224-4942-9348-e7d6d90b1c90" 00:37:18.605 ], 00:37:18.606 "product_name": "Malloc disk", 00:37:18.606 "block_size": 512, 00:37:18.606 "num_blocks": 65536, 00:37:18.606 "uuid": "85d459e1-e224-4942-9348-e7d6d90b1c90", 00:37:18.606 "assigned_rate_limits": { 00:37:18.606 "rw_ios_per_sec": 0, 00:37:18.606 "rw_mbytes_per_sec": 0, 00:37:18.606 "r_mbytes_per_sec": 0, 00:37:18.606 "w_mbytes_per_sec": 0 00:37:18.606 }, 00:37:18.606 "claimed": true, 00:37:18.606 "claim_type": "exclusive_write", 00:37:18.606 "zoned": false, 00:37:18.606 "supported_io_types": { 00:37:18.606 "read": true, 00:37:18.606 "write": true, 00:37:18.606 "unmap": true, 00:37:18.606 "flush": true, 00:37:18.606 "reset": true, 00:37:18.606 "nvme_admin": false, 00:37:18.606 "nvme_io": false, 00:37:18.606 "nvme_io_md": false, 00:37:18.606 "write_zeroes": true, 00:37:18.606 "zcopy": true, 00:37:18.606 "get_zone_info": false, 00:37:18.606 "zone_management": false, 00:37:18.606 "zone_append": false, 00:37:18.606 "compare": false, 00:37:18.606 "compare_and_write": false, 00:37:18.606 "abort": true, 00:37:18.606 "seek_hole": false, 00:37:18.606 "seek_data": false, 00:37:18.606 "copy": true, 00:37:18.606 "nvme_iov_md": false 00:37:18.606 }, 00:37:18.606 
"memory_domains": [ 00:37:18.606 { 00:37:18.606 "dma_device_id": "system", 00:37:18.606 "dma_device_type": 1 00:37:18.606 }, 00:37:18.606 { 00:37:18.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:18.606 "dma_device_type": 2 00:37:18.606 } 00:37:18.606 ], 00:37:18.606 "driver_specific": {} 00:37:18.606 } 00:37:18.606 ] 00:37:18.606 05:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.606 05:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:37:18.606 05:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:37:18.606 05:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:37:18.606 05:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:37:18.606 05:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:18.606 05:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:18.606 05:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:18.606 05:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:18.606 05:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:18.606 05:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:18.606 05:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:18.606 05:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:18.606 05:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:18.606 05:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:37:18.606 05:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:18.606 05:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.606 05:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.606 05:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.606 05:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:18.606 "name": "Existed_Raid", 00:37:18.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:18.606 "strip_size_kb": 0, 00:37:18.606 "state": "configuring", 00:37:18.606 "raid_level": "raid1", 00:37:18.606 "superblock": false, 00:37:18.606 "num_base_bdevs": 3, 00:37:18.606 "num_base_bdevs_discovered": 2, 00:37:18.606 "num_base_bdevs_operational": 3, 00:37:18.606 "base_bdevs_list": [ 00:37:18.606 { 00:37:18.606 "name": "BaseBdev1", 00:37:18.606 "uuid": "871e4e1a-52b9-475f-b594-1fd586052dd7", 00:37:18.606 "is_configured": true, 00:37:18.606 "data_offset": 0, 00:37:18.606 "data_size": 65536 00:37:18.606 }, 00:37:18.606 { 00:37:18.606 "name": "BaseBdev2", 00:37:18.606 "uuid": "85d459e1-e224-4942-9348-e7d6d90b1c90", 00:37:18.606 "is_configured": true, 00:37:18.606 "data_offset": 0, 00:37:18.606 "data_size": 65536 00:37:18.606 }, 00:37:18.606 { 00:37:18.606 "name": "BaseBdev3", 00:37:18.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:18.606 "is_configured": false, 00:37:18.606 "data_offset": 0, 00:37:18.606 "data_size": 0 00:37:18.606 } 00:37:18.606 ] 00:37:18.606 }' 00:37:18.606 05:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:18.606 05:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:19.172 05:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:37:19.172 05:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.172 05:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:19.172 [2024-12-09 05:28:06.038819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:19.172 [2024-12-09 05:28:06.039202] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:37:19.172 [2024-12-09 05:28:06.039236] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:37:19.172 [2024-12-09 05:28:06.039625] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:37:19.172 [2024-12-09 05:28:06.039892] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:37:19.172 [2024-12-09 05:28:06.039909] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:37:19.172 [2024-12-09 05:28:06.040222] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:19.172 BaseBdev3 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:19.172 [ 00:37:19.172 { 00:37:19.172 "name": "BaseBdev3", 00:37:19.172 "aliases": [ 00:37:19.172 "6f6dba7f-a2f1-4241-8513-751c8636f698" 00:37:19.172 ], 00:37:19.172 "product_name": "Malloc disk", 00:37:19.172 "block_size": 512, 00:37:19.172 "num_blocks": 65536, 00:37:19.172 "uuid": "6f6dba7f-a2f1-4241-8513-751c8636f698", 00:37:19.172 "assigned_rate_limits": { 00:37:19.172 "rw_ios_per_sec": 0, 00:37:19.172 "rw_mbytes_per_sec": 0, 00:37:19.172 "r_mbytes_per_sec": 0, 00:37:19.172 "w_mbytes_per_sec": 0 00:37:19.172 }, 00:37:19.172 "claimed": true, 00:37:19.172 "claim_type": "exclusive_write", 00:37:19.172 "zoned": false, 00:37:19.172 "supported_io_types": { 00:37:19.172 "read": true, 00:37:19.172 "write": true, 00:37:19.172 "unmap": true, 00:37:19.172 "flush": true, 00:37:19.172 "reset": true, 00:37:19.172 "nvme_admin": false, 00:37:19.172 "nvme_io": false, 00:37:19.172 "nvme_io_md": false, 00:37:19.172 "write_zeroes": true, 00:37:19.172 "zcopy": true, 00:37:19.172 "get_zone_info": false, 00:37:19.172 "zone_management": false, 00:37:19.172 "zone_append": false, 00:37:19.172 "compare": false, 00:37:19.172 "compare_and_write": false, 00:37:19.172 "abort": true, 00:37:19.172 "seek_hole": false, 00:37:19.172 "seek_data": false, 00:37:19.172 
"copy": true, 00:37:19.172 "nvme_iov_md": false 00:37:19.172 }, 00:37:19.172 "memory_domains": [ 00:37:19.172 { 00:37:19.172 "dma_device_id": "system", 00:37:19.172 "dma_device_type": 1 00:37:19.172 }, 00:37:19.172 { 00:37:19.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:19.172 "dma_device_type": 2 00:37:19.172 } 00:37:19.172 ], 00:37:19.172 "driver_specific": {} 00:37:19.172 } 00:37:19.172 ] 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:19.172 05:28:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:19.172 "name": "Existed_Raid", 00:37:19.172 "uuid": "e59862d7-37cf-45ca-9843-6e79676b8e53", 00:37:19.172 "strip_size_kb": 0, 00:37:19.172 "state": "online", 00:37:19.172 "raid_level": "raid1", 00:37:19.172 "superblock": false, 00:37:19.172 "num_base_bdevs": 3, 00:37:19.172 "num_base_bdevs_discovered": 3, 00:37:19.172 "num_base_bdevs_operational": 3, 00:37:19.172 "base_bdevs_list": [ 00:37:19.172 { 00:37:19.172 "name": "BaseBdev1", 00:37:19.172 "uuid": "871e4e1a-52b9-475f-b594-1fd586052dd7", 00:37:19.172 "is_configured": true, 00:37:19.172 "data_offset": 0, 00:37:19.172 "data_size": 65536 00:37:19.172 }, 00:37:19.172 { 00:37:19.172 "name": "BaseBdev2", 00:37:19.172 "uuid": "85d459e1-e224-4942-9348-e7d6d90b1c90", 00:37:19.172 "is_configured": true, 00:37:19.172 "data_offset": 0, 00:37:19.172 "data_size": 65536 00:37:19.172 }, 00:37:19.172 { 00:37:19.172 "name": "BaseBdev3", 00:37:19.172 "uuid": "6f6dba7f-a2f1-4241-8513-751c8636f698", 00:37:19.172 "is_configured": true, 00:37:19.172 "data_offset": 0, 00:37:19.172 "data_size": 65536 00:37:19.172 } 00:37:19.172 ] 00:37:19.172 }' 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:19.172 05:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:19.739 05:28:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:37:19.739 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:37:19.739 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:37:19.739 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:37:19.739 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:37:19.739 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:37:19.739 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:37:19.739 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:37:19.739 05:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.739 05:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:19.739 [2024-12-09 05:28:06.603803] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:19.739 05:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.739 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:19.739 "name": "Existed_Raid", 00:37:19.739 "aliases": [ 00:37:19.739 "e59862d7-37cf-45ca-9843-6e79676b8e53" 00:37:19.739 ], 00:37:19.739 "product_name": "Raid Volume", 00:37:19.739 "block_size": 512, 00:37:19.739 "num_blocks": 65536, 00:37:19.739 "uuid": "e59862d7-37cf-45ca-9843-6e79676b8e53", 00:37:19.739 "assigned_rate_limits": { 00:37:19.739 "rw_ios_per_sec": 0, 00:37:19.739 "rw_mbytes_per_sec": 0, 00:37:19.739 "r_mbytes_per_sec": 0, 00:37:19.739 "w_mbytes_per_sec": 0 00:37:19.739 }, 00:37:19.739 "claimed": false, 00:37:19.739 "zoned": false, 
00:37:19.739 "supported_io_types": { 00:37:19.739 "read": true, 00:37:19.739 "write": true, 00:37:19.739 "unmap": false, 00:37:19.739 "flush": false, 00:37:19.739 "reset": true, 00:37:19.739 "nvme_admin": false, 00:37:19.739 "nvme_io": false, 00:37:19.739 "nvme_io_md": false, 00:37:19.739 "write_zeroes": true, 00:37:19.739 "zcopy": false, 00:37:19.739 "get_zone_info": false, 00:37:19.739 "zone_management": false, 00:37:19.739 "zone_append": false, 00:37:19.739 "compare": false, 00:37:19.739 "compare_and_write": false, 00:37:19.739 "abort": false, 00:37:19.739 "seek_hole": false, 00:37:19.739 "seek_data": false, 00:37:19.739 "copy": false, 00:37:19.739 "nvme_iov_md": false 00:37:19.739 }, 00:37:19.739 "memory_domains": [ 00:37:19.739 { 00:37:19.739 "dma_device_id": "system", 00:37:19.739 "dma_device_type": 1 00:37:19.739 }, 00:37:19.739 { 00:37:19.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:19.739 "dma_device_type": 2 00:37:19.739 }, 00:37:19.739 { 00:37:19.739 "dma_device_id": "system", 00:37:19.739 "dma_device_type": 1 00:37:19.739 }, 00:37:19.739 { 00:37:19.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:19.739 "dma_device_type": 2 00:37:19.739 }, 00:37:19.739 { 00:37:19.739 "dma_device_id": "system", 00:37:19.739 "dma_device_type": 1 00:37:19.739 }, 00:37:19.739 { 00:37:19.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:19.739 "dma_device_type": 2 00:37:19.739 } 00:37:19.739 ], 00:37:19.739 "driver_specific": { 00:37:19.739 "raid": { 00:37:19.739 "uuid": "e59862d7-37cf-45ca-9843-6e79676b8e53", 00:37:19.739 "strip_size_kb": 0, 00:37:19.739 "state": "online", 00:37:19.739 "raid_level": "raid1", 00:37:19.739 "superblock": false, 00:37:19.739 "num_base_bdevs": 3, 00:37:19.739 "num_base_bdevs_discovered": 3, 00:37:19.739 "num_base_bdevs_operational": 3, 00:37:19.739 "base_bdevs_list": [ 00:37:19.739 { 00:37:19.739 "name": "BaseBdev1", 00:37:19.739 "uuid": "871e4e1a-52b9-475f-b594-1fd586052dd7", 00:37:19.739 "is_configured": true, 00:37:19.739 
"data_offset": 0, 00:37:19.739 "data_size": 65536 00:37:19.739 }, 00:37:19.739 { 00:37:19.739 "name": "BaseBdev2", 00:37:19.739 "uuid": "85d459e1-e224-4942-9348-e7d6d90b1c90", 00:37:19.739 "is_configured": true, 00:37:19.739 "data_offset": 0, 00:37:19.739 "data_size": 65536 00:37:19.739 }, 00:37:19.739 { 00:37:19.739 "name": "BaseBdev3", 00:37:19.739 "uuid": "6f6dba7f-a2f1-4241-8513-751c8636f698", 00:37:19.739 "is_configured": true, 00:37:19.739 "data_offset": 0, 00:37:19.739 "data_size": 65536 00:37:19.739 } 00:37:19.739 ] 00:37:19.739 } 00:37:19.739 } 00:37:19.739 }' 00:37:19.739 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:19.739 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:37:19.739 BaseBdev2 00:37:19.739 BaseBdev3' 00:37:19.739 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:19.998 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:37:19.998 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:19.998 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:37:19.998 05:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.998 05:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:19.998 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:19.998 05:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.998 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:37:19.998 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:19.998 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:19.998 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:37:19.998 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:19.998 05:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.998 05:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:19.998 05:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.998 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:19.998 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:19.998 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:19.998 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:37:19.998 05:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.998 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:19.998 05:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:19.998 05:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.998 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:19.998 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:37:19.998 05:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:37:19.998 05:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.998 05:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:19.998 [2024-12-09 05:28:06.927625] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:20.256 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.256 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:37:20.256 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:37:20.256 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:37:20.256 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:37:20.256 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:37:20.256 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:37:20.256 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:20.256 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:20.256 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:20.256 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:20.256 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:20.256 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:20.256 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:37:20.256 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:20.256 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:20.256 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:20.256 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:20.256 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.256 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:20.256 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.256 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:20.256 "name": "Existed_Raid", 00:37:20.256 "uuid": "e59862d7-37cf-45ca-9843-6e79676b8e53", 00:37:20.256 "strip_size_kb": 0, 00:37:20.256 "state": "online", 00:37:20.256 "raid_level": "raid1", 00:37:20.256 "superblock": false, 00:37:20.256 "num_base_bdevs": 3, 00:37:20.256 "num_base_bdevs_discovered": 2, 00:37:20.256 "num_base_bdevs_operational": 2, 00:37:20.256 "base_bdevs_list": [ 00:37:20.256 { 00:37:20.256 "name": null, 00:37:20.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:20.256 "is_configured": false, 00:37:20.256 "data_offset": 0, 00:37:20.256 "data_size": 65536 00:37:20.256 }, 00:37:20.256 { 00:37:20.256 "name": "BaseBdev2", 00:37:20.256 "uuid": "85d459e1-e224-4942-9348-e7d6d90b1c90", 00:37:20.256 "is_configured": true, 00:37:20.256 "data_offset": 0, 00:37:20.256 "data_size": 65536 00:37:20.256 }, 00:37:20.256 { 00:37:20.256 "name": "BaseBdev3", 00:37:20.256 "uuid": "6f6dba7f-a2f1-4241-8513-751c8636f698", 00:37:20.256 "is_configured": true, 00:37:20.256 "data_offset": 0, 00:37:20.256 "data_size": 65536 00:37:20.256 } 00:37:20.256 ] 
00:37:20.257 }' 00:37:20.257 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:20.257 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:20.872 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:37:20.873 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:37:20.873 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:20.873 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:37:20.873 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.873 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:20.873 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.873 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:37:20.873 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:37:20.873 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:37:20.873 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.873 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:20.873 [2024-12-09 05:28:07.585553] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:37:20.873 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.873 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:37:20.873 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:37:20.873 05:28:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:20.873 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.873 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:37:20.873 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:20.873 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.873 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:37:20.873 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:37:20.873 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:37:20.873 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.873 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:20.873 [2024-12-09 05:28:07.725710] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:37:20.873 [2024-12-09 05:28:07.725864] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:20.873 [2024-12-09 05:28:07.806777] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:20.873 [2024-12-09 05:28:07.806858] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:20.873 [2024-12-09 05:28:07.806879] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:37:20.873 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.873 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:37:20.873 05:28:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:37:20.873 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:20.873 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:37:20.873 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.873 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:20.873 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.131 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:37:21.131 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:37:21.131 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:37:21.131 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:37:21.131 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:37:21.131 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:37:21.131 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.131 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:21.131 BaseBdev2 00:37:21.131 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.131 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:37:21.131 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:37:21.131 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:21.131 
05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:37:21.131 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:21.131 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:21.131 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:21.131 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.131 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:21.131 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.131 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:37:21.131 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.131 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:21.131 [ 00:37:21.131 { 00:37:21.131 "name": "BaseBdev2", 00:37:21.131 "aliases": [ 00:37:21.131 "0f73a489-7fe9-49cb-b3c2-cabe368d789c" 00:37:21.131 ], 00:37:21.131 "product_name": "Malloc disk", 00:37:21.131 "block_size": 512, 00:37:21.131 "num_blocks": 65536, 00:37:21.131 "uuid": "0f73a489-7fe9-49cb-b3c2-cabe368d789c", 00:37:21.131 "assigned_rate_limits": { 00:37:21.131 "rw_ios_per_sec": 0, 00:37:21.131 "rw_mbytes_per_sec": 0, 00:37:21.131 "r_mbytes_per_sec": 0, 00:37:21.131 "w_mbytes_per_sec": 0 00:37:21.131 }, 00:37:21.131 "claimed": false, 00:37:21.131 "zoned": false, 00:37:21.131 "supported_io_types": { 00:37:21.131 "read": true, 00:37:21.131 "write": true, 00:37:21.131 "unmap": true, 00:37:21.131 "flush": true, 00:37:21.131 "reset": true, 00:37:21.131 "nvme_admin": false, 00:37:21.131 "nvme_io": false, 00:37:21.131 "nvme_io_md": false, 00:37:21.131 "write_zeroes": true, 
00:37:21.131 "zcopy": true, 00:37:21.131 "get_zone_info": false, 00:37:21.131 "zone_management": false, 00:37:21.131 "zone_append": false, 00:37:21.131 "compare": false, 00:37:21.131 "compare_and_write": false, 00:37:21.131 "abort": true, 00:37:21.131 "seek_hole": false, 00:37:21.132 "seek_data": false, 00:37:21.132 "copy": true, 00:37:21.132 "nvme_iov_md": false 00:37:21.132 }, 00:37:21.132 "memory_domains": [ 00:37:21.132 { 00:37:21.132 "dma_device_id": "system", 00:37:21.132 "dma_device_type": 1 00:37:21.132 }, 00:37:21.132 { 00:37:21.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:21.132 "dma_device_type": 2 00:37:21.132 } 00:37:21.132 ], 00:37:21.132 "driver_specific": {} 00:37:21.132 } 00:37:21.132 ] 00:37:21.132 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.132 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:37:21.132 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:37:21.132 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:37:21.132 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:37:21.132 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.132 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:21.132 BaseBdev3 00:37:21.132 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.132 05:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:37:21.132 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:37:21.132 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:21.132 05:28:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:37:21.132 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:21.132 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:21.132 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:21.132 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.132 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:21.132 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.132 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:37:21.132 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.132 05:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:21.132 [ 00:37:21.132 { 00:37:21.132 "name": "BaseBdev3", 00:37:21.132 "aliases": [ 00:37:21.132 "1c51f53f-7802-48c6-be74-0a10cf5da051" 00:37:21.132 ], 00:37:21.132 "product_name": "Malloc disk", 00:37:21.132 "block_size": 512, 00:37:21.132 "num_blocks": 65536, 00:37:21.132 "uuid": "1c51f53f-7802-48c6-be74-0a10cf5da051", 00:37:21.132 "assigned_rate_limits": { 00:37:21.132 "rw_ios_per_sec": 0, 00:37:21.132 "rw_mbytes_per_sec": 0, 00:37:21.132 "r_mbytes_per_sec": 0, 00:37:21.132 "w_mbytes_per_sec": 0 00:37:21.132 }, 00:37:21.132 "claimed": false, 00:37:21.132 "zoned": false, 00:37:21.132 "supported_io_types": { 00:37:21.132 "read": true, 00:37:21.132 "write": true, 00:37:21.132 "unmap": true, 00:37:21.132 "flush": true, 00:37:21.132 "reset": true, 00:37:21.132 "nvme_admin": false, 00:37:21.132 "nvme_io": false, 00:37:21.132 "nvme_io_md": false, 00:37:21.132 "write_zeroes": true, 
00:37:21.132 "zcopy": true, 00:37:21.132 "get_zone_info": false, 00:37:21.132 "zone_management": false, 00:37:21.132 "zone_append": false, 00:37:21.132 "compare": false, 00:37:21.132 "compare_and_write": false, 00:37:21.132 "abort": true, 00:37:21.132 "seek_hole": false, 00:37:21.132 "seek_data": false, 00:37:21.132 "copy": true, 00:37:21.132 "nvme_iov_md": false 00:37:21.132 }, 00:37:21.132 "memory_domains": [ 00:37:21.132 { 00:37:21.132 "dma_device_id": "system", 00:37:21.132 "dma_device_type": 1 00:37:21.132 }, 00:37:21.132 { 00:37:21.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:21.132 "dma_device_type": 2 00:37:21.132 } 00:37:21.132 ], 00:37:21.132 "driver_specific": {} 00:37:21.132 } 00:37:21.132 ] 00:37:21.132 05:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.132 05:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:37:21.132 05:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:37:21.132 05:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:37:21.132 05:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:37:21.132 05:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.132 05:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:21.132 [2024-12-09 05:28:08.015256] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:21.132 [2024-12-09 05:28:08.015570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:21.132 [2024-12-09 05:28:08.015761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:21.132 [2024-12-09 05:28:08.018286] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:21.132 05:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.132 05:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:37:21.132 05:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:21.132 05:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:21.132 05:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:21.132 05:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:21.132 05:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:21.132 05:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:21.132 05:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:21.132 05:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:21.132 05:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:21.132 05:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:21.132 05:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:21.132 05:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.132 05:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:21.132 05:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.132 05:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:37:21.132 "name": "Existed_Raid", 00:37:21.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:21.132 "strip_size_kb": 0, 00:37:21.132 "state": "configuring", 00:37:21.132 "raid_level": "raid1", 00:37:21.132 "superblock": false, 00:37:21.132 "num_base_bdevs": 3, 00:37:21.132 "num_base_bdevs_discovered": 2, 00:37:21.132 "num_base_bdevs_operational": 3, 00:37:21.132 "base_bdevs_list": [ 00:37:21.132 { 00:37:21.132 "name": "BaseBdev1", 00:37:21.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:21.132 "is_configured": false, 00:37:21.132 "data_offset": 0, 00:37:21.132 "data_size": 0 00:37:21.132 }, 00:37:21.132 { 00:37:21.132 "name": "BaseBdev2", 00:37:21.132 "uuid": "0f73a489-7fe9-49cb-b3c2-cabe368d789c", 00:37:21.132 "is_configured": true, 00:37:21.132 "data_offset": 0, 00:37:21.132 "data_size": 65536 00:37:21.132 }, 00:37:21.132 { 00:37:21.132 "name": "BaseBdev3", 00:37:21.132 "uuid": "1c51f53f-7802-48c6-be74-0a10cf5da051", 00:37:21.132 "is_configured": true, 00:37:21.132 "data_offset": 0, 00:37:21.132 "data_size": 65536 00:37:21.132 } 00:37:21.132 ] 00:37:21.132 }' 00:37:21.132 05:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:21.132 05:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:21.697 05:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:37:21.698 05:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.698 05:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:21.698 [2024-12-09 05:28:08.543437] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:37:21.698 05:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.698 05:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:37:21.698 05:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:21.698 05:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:21.698 05:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:21.698 05:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:21.698 05:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:21.698 05:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:21.698 05:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:21.698 05:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:21.698 05:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:21.698 05:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:21.698 05:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.698 05:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:21.698 05:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:21.698 05:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.698 05:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:21.698 "name": "Existed_Raid", 00:37:21.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:21.698 "strip_size_kb": 0, 00:37:21.698 "state": "configuring", 00:37:21.698 "raid_level": "raid1", 00:37:21.698 "superblock": false, 00:37:21.698 "num_base_bdevs": 3, 
00:37:21.698 "num_base_bdevs_discovered": 1, 00:37:21.698 "num_base_bdevs_operational": 3, 00:37:21.698 "base_bdevs_list": [ 00:37:21.698 { 00:37:21.698 "name": "BaseBdev1", 00:37:21.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:21.698 "is_configured": false, 00:37:21.698 "data_offset": 0, 00:37:21.698 "data_size": 0 00:37:21.698 }, 00:37:21.698 { 00:37:21.698 "name": null, 00:37:21.698 "uuid": "0f73a489-7fe9-49cb-b3c2-cabe368d789c", 00:37:21.698 "is_configured": false, 00:37:21.698 "data_offset": 0, 00:37:21.698 "data_size": 65536 00:37:21.698 }, 00:37:21.698 { 00:37:21.698 "name": "BaseBdev3", 00:37:21.698 "uuid": "1c51f53f-7802-48c6-be74-0a10cf5da051", 00:37:21.698 "is_configured": true, 00:37:21.698 "data_offset": 0, 00:37:21.698 "data_size": 65536 00:37:21.698 } 00:37:21.698 ] 00:37:21.698 }' 00:37:21.698 05:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:21.698 05:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.264 05:28:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:22.264 [2024-12-09 05:28:09.169364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:22.264 BaseBdev1 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:22.264 [ 00:37:22.264 { 00:37:22.264 "name": "BaseBdev1", 00:37:22.264 "aliases": [ 00:37:22.264 "b2b422e3-eb61-479c-8c90-df3ec7b5782c" 00:37:22.264 ], 00:37:22.264 "product_name": "Malloc disk", 
00:37:22.264 "block_size": 512, 00:37:22.264 "num_blocks": 65536, 00:37:22.264 "uuid": "b2b422e3-eb61-479c-8c90-df3ec7b5782c", 00:37:22.264 "assigned_rate_limits": { 00:37:22.264 "rw_ios_per_sec": 0, 00:37:22.264 "rw_mbytes_per_sec": 0, 00:37:22.264 "r_mbytes_per_sec": 0, 00:37:22.264 "w_mbytes_per_sec": 0 00:37:22.264 }, 00:37:22.264 "claimed": true, 00:37:22.264 "claim_type": "exclusive_write", 00:37:22.264 "zoned": false, 00:37:22.264 "supported_io_types": { 00:37:22.264 "read": true, 00:37:22.264 "write": true, 00:37:22.264 "unmap": true, 00:37:22.264 "flush": true, 00:37:22.264 "reset": true, 00:37:22.264 "nvme_admin": false, 00:37:22.264 "nvme_io": false, 00:37:22.264 "nvme_io_md": false, 00:37:22.264 "write_zeroes": true, 00:37:22.264 "zcopy": true, 00:37:22.264 "get_zone_info": false, 00:37:22.264 "zone_management": false, 00:37:22.264 "zone_append": false, 00:37:22.264 "compare": false, 00:37:22.264 "compare_and_write": false, 00:37:22.264 "abort": true, 00:37:22.264 "seek_hole": false, 00:37:22.264 "seek_data": false, 00:37:22.264 "copy": true, 00:37:22.264 "nvme_iov_md": false 00:37:22.264 }, 00:37:22.264 "memory_domains": [ 00:37:22.264 { 00:37:22.264 "dma_device_id": "system", 00:37:22.264 "dma_device_type": 1 00:37:22.264 }, 00:37:22.264 { 00:37:22.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:22.264 "dma_device_type": 2 00:37:22.264 } 00:37:22.264 ], 00:37:22.264 "driver_specific": {} 00:37:22.264 } 00:37:22.264 ] 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:22.264 05:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.523 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:22.523 "name": "Existed_Raid", 00:37:22.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:22.523 "strip_size_kb": 0, 00:37:22.523 "state": "configuring", 00:37:22.523 "raid_level": "raid1", 00:37:22.523 "superblock": false, 00:37:22.523 "num_base_bdevs": 3, 00:37:22.523 "num_base_bdevs_discovered": 2, 00:37:22.523 "num_base_bdevs_operational": 3, 00:37:22.523 "base_bdevs_list": [ 00:37:22.523 { 00:37:22.523 "name": "BaseBdev1", 00:37:22.523 "uuid": 
"b2b422e3-eb61-479c-8c90-df3ec7b5782c", 00:37:22.523 "is_configured": true, 00:37:22.523 "data_offset": 0, 00:37:22.523 "data_size": 65536 00:37:22.523 }, 00:37:22.523 { 00:37:22.523 "name": null, 00:37:22.523 "uuid": "0f73a489-7fe9-49cb-b3c2-cabe368d789c", 00:37:22.523 "is_configured": false, 00:37:22.523 "data_offset": 0, 00:37:22.523 "data_size": 65536 00:37:22.523 }, 00:37:22.523 { 00:37:22.523 "name": "BaseBdev3", 00:37:22.523 "uuid": "1c51f53f-7802-48c6-be74-0a10cf5da051", 00:37:22.523 "is_configured": true, 00:37:22.523 "data_offset": 0, 00:37:22.523 "data_size": 65536 00:37:22.523 } 00:37:22.523 ] 00:37:22.523 }' 00:37:22.523 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:22.523 05:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:22.781 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:37:22.781 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:22.781 05:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.781 05:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:22.781 05:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.039 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:37:23.039 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:37:23.039 05:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.039 05:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:23.039 [2024-12-09 05:28:09.781514] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:37:23.039 05:28:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.039 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:37:23.039 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:23.039 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:23.040 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:23.040 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:23.040 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:23.040 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:23.040 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:23.040 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:23.040 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:23.040 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:23.040 05:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.040 05:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:23.040 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:23.040 05:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.040 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:23.040 "name": "Existed_Raid", 00:37:23.040 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:37:23.040 "strip_size_kb": 0, 00:37:23.040 "state": "configuring", 00:37:23.040 "raid_level": "raid1", 00:37:23.040 "superblock": false, 00:37:23.040 "num_base_bdevs": 3, 00:37:23.040 "num_base_bdevs_discovered": 1, 00:37:23.040 "num_base_bdevs_operational": 3, 00:37:23.040 "base_bdevs_list": [ 00:37:23.040 { 00:37:23.040 "name": "BaseBdev1", 00:37:23.040 "uuid": "b2b422e3-eb61-479c-8c90-df3ec7b5782c", 00:37:23.040 "is_configured": true, 00:37:23.040 "data_offset": 0, 00:37:23.040 "data_size": 65536 00:37:23.040 }, 00:37:23.040 { 00:37:23.040 "name": null, 00:37:23.040 "uuid": "0f73a489-7fe9-49cb-b3c2-cabe368d789c", 00:37:23.040 "is_configured": false, 00:37:23.040 "data_offset": 0, 00:37:23.040 "data_size": 65536 00:37:23.040 }, 00:37:23.040 { 00:37:23.040 "name": null, 00:37:23.040 "uuid": "1c51f53f-7802-48c6-be74-0a10cf5da051", 00:37:23.040 "is_configured": false, 00:37:23.040 "data_offset": 0, 00:37:23.040 "data_size": 65536 00:37:23.040 } 00:37:23.040 ] 00:37:23.040 }' 00:37:23.040 05:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:23.040 05:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:23.607 05:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:23.607 05:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:37:23.607 05:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.607 05:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:23.607 05:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.607 05:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:37:23.607 05:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:37:23.607 05:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.607 05:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:23.607 [2024-12-09 05:28:10.369724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:23.607 05:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.607 05:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:37:23.607 05:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:23.607 05:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:23.607 05:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:23.607 05:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:23.607 05:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:23.607 05:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:23.607 05:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:23.607 05:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:23.607 05:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:23.607 05:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:23.607 05:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:23.607 05:28:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.607 05:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:23.607 05:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.607 05:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:23.607 "name": "Existed_Raid", 00:37:23.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:23.607 "strip_size_kb": 0, 00:37:23.607 "state": "configuring", 00:37:23.607 "raid_level": "raid1", 00:37:23.607 "superblock": false, 00:37:23.607 "num_base_bdevs": 3, 00:37:23.607 "num_base_bdevs_discovered": 2, 00:37:23.607 "num_base_bdevs_operational": 3, 00:37:23.607 "base_bdevs_list": [ 00:37:23.607 { 00:37:23.607 "name": "BaseBdev1", 00:37:23.607 "uuid": "b2b422e3-eb61-479c-8c90-df3ec7b5782c", 00:37:23.607 "is_configured": true, 00:37:23.607 "data_offset": 0, 00:37:23.607 "data_size": 65536 00:37:23.607 }, 00:37:23.607 { 00:37:23.607 "name": null, 00:37:23.607 "uuid": "0f73a489-7fe9-49cb-b3c2-cabe368d789c", 00:37:23.607 "is_configured": false, 00:37:23.607 "data_offset": 0, 00:37:23.607 "data_size": 65536 00:37:23.607 }, 00:37:23.607 { 00:37:23.607 "name": "BaseBdev3", 00:37:23.607 "uuid": "1c51f53f-7802-48c6-be74-0a10cf5da051", 00:37:23.607 "is_configured": true, 00:37:23.607 "data_offset": 0, 00:37:23.607 "data_size": 65536 00:37:23.607 } 00:37:23.607 ] 00:37:23.607 }' 00:37:23.607 05:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:23.607 05:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:24.175 05:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:24.175 05:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:37:24.175 05:28:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.175 05:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:24.175 05:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.175 05:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:37:24.175 05:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:37:24.175 05:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.175 05:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:24.175 [2024-12-09 05:28:10.945901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:24.175 05:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.175 05:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:37:24.175 05:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:24.175 05:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:24.175 05:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:24.175 05:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:24.175 05:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:24.175 05:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:24.175 05:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:24.175 05:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:24.175 05:28:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:24.175 05:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:24.175 05:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:24.175 05:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.175 05:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:24.175 05:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.175 05:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:24.175 "name": "Existed_Raid", 00:37:24.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:24.175 "strip_size_kb": 0, 00:37:24.175 "state": "configuring", 00:37:24.175 "raid_level": "raid1", 00:37:24.175 "superblock": false, 00:37:24.175 "num_base_bdevs": 3, 00:37:24.175 "num_base_bdevs_discovered": 1, 00:37:24.175 "num_base_bdevs_operational": 3, 00:37:24.175 "base_bdevs_list": [ 00:37:24.175 { 00:37:24.175 "name": null, 00:37:24.175 "uuid": "b2b422e3-eb61-479c-8c90-df3ec7b5782c", 00:37:24.175 "is_configured": false, 00:37:24.175 "data_offset": 0, 00:37:24.175 "data_size": 65536 00:37:24.175 }, 00:37:24.175 { 00:37:24.175 "name": null, 00:37:24.175 "uuid": "0f73a489-7fe9-49cb-b3c2-cabe368d789c", 00:37:24.175 "is_configured": false, 00:37:24.175 "data_offset": 0, 00:37:24.175 "data_size": 65536 00:37:24.175 }, 00:37:24.175 { 00:37:24.175 "name": "BaseBdev3", 00:37:24.175 "uuid": "1c51f53f-7802-48c6-be74-0a10cf5da051", 00:37:24.175 "is_configured": true, 00:37:24.175 "data_offset": 0, 00:37:24.175 "data_size": 65536 00:37:24.175 } 00:37:24.175 ] 00:37:24.175 }' 00:37:24.175 05:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:24.175 05:28:11 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:37:24.784 05:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:24.784 05:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:37:24.784 05:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.784 05:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:24.784 05:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.784 05:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:37:24.784 05:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:37:24.784 05:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.784 05:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:24.784 [2024-12-09 05:28:11.598262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:24.784 05:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.784 05:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:37:24.784 05:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:24.784 05:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:24.784 05:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:24.784 05:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:24.784 05:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:37:24.784 05:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:24.784 05:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:24.784 05:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:24.784 05:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:24.784 05:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:24.784 05:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:24.784 05:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.784 05:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:24.784 05:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.784 05:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:24.784 "name": "Existed_Raid", 00:37:24.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:24.784 "strip_size_kb": 0, 00:37:24.784 "state": "configuring", 00:37:24.784 "raid_level": "raid1", 00:37:24.784 "superblock": false, 00:37:24.784 "num_base_bdevs": 3, 00:37:24.784 "num_base_bdevs_discovered": 2, 00:37:24.784 "num_base_bdevs_operational": 3, 00:37:24.784 "base_bdevs_list": [ 00:37:24.784 { 00:37:24.784 "name": null, 00:37:24.784 "uuid": "b2b422e3-eb61-479c-8c90-df3ec7b5782c", 00:37:24.784 "is_configured": false, 00:37:24.784 "data_offset": 0, 00:37:24.784 "data_size": 65536 00:37:24.784 }, 00:37:24.784 { 00:37:24.784 "name": "BaseBdev2", 00:37:24.784 "uuid": "0f73a489-7fe9-49cb-b3c2-cabe368d789c", 00:37:24.784 "is_configured": true, 00:37:24.784 "data_offset": 0, 00:37:24.784 "data_size": 65536 00:37:24.784 }, 00:37:24.784 { 
00:37:24.784 "name": "BaseBdev3", 00:37:24.784 "uuid": "1c51f53f-7802-48c6-be74-0a10cf5da051", 00:37:24.784 "is_configured": true, 00:37:24.784 "data_offset": 0, 00:37:24.784 "data_size": 65536 00:37:24.784 } 00:37:24.784 ] 00:37:24.784 }' 00:37:24.784 05:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:24.784 05:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b2b422e3-eb61-479c-8c90-df3ec7b5782c 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.354 05:28:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:25.354 [2024-12-09 05:28:12.253728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:37:25.354 [2024-12-09 05:28:12.253826] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:37:25.354 [2024-12-09 05:28:12.253840] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:37:25.354 [2024-12-09 05:28:12.254254] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:37:25.354 [2024-12-09 05:28:12.254501] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:37:25.354 [2024-12-09 05:28:12.254522] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:37:25.354 [2024-12-09 05:28:12.254875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:25.354 NewBaseBdev 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:25.354 [ 00:37:25.354 { 00:37:25.354 "name": "NewBaseBdev", 00:37:25.354 "aliases": [ 00:37:25.354 "b2b422e3-eb61-479c-8c90-df3ec7b5782c" 00:37:25.354 ], 00:37:25.354 "product_name": "Malloc disk", 00:37:25.354 "block_size": 512, 00:37:25.354 "num_blocks": 65536, 00:37:25.354 "uuid": "b2b422e3-eb61-479c-8c90-df3ec7b5782c", 00:37:25.354 "assigned_rate_limits": { 00:37:25.354 "rw_ios_per_sec": 0, 00:37:25.354 "rw_mbytes_per_sec": 0, 00:37:25.354 "r_mbytes_per_sec": 0, 00:37:25.354 "w_mbytes_per_sec": 0 00:37:25.354 }, 00:37:25.354 "claimed": true, 00:37:25.354 "claim_type": "exclusive_write", 00:37:25.354 "zoned": false, 00:37:25.354 "supported_io_types": { 00:37:25.354 "read": true, 00:37:25.354 "write": true, 00:37:25.354 "unmap": true, 00:37:25.354 "flush": true, 00:37:25.354 "reset": true, 00:37:25.354 "nvme_admin": false, 00:37:25.354 "nvme_io": false, 00:37:25.354 "nvme_io_md": false, 00:37:25.354 "write_zeroes": true, 00:37:25.354 "zcopy": true, 00:37:25.354 "get_zone_info": false, 00:37:25.354 "zone_management": false, 00:37:25.354 "zone_append": false, 00:37:25.354 "compare": false, 00:37:25.354 "compare_and_write": false, 00:37:25.354 "abort": true, 00:37:25.354 "seek_hole": false, 00:37:25.354 "seek_data": false, 00:37:25.354 "copy": true, 00:37:25.354 "nvme_iov_md": false 00:37:25.354 }, 00:37:25.354 "memory_domains": [ 00:37:25.354 { 00:37:25.354 
"dma_device_id": "system", 00:37:25.354 "dma_device_type": 1 00:37:25.354 }, 00:37:25.354 { 00:37:25.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:25.354 "dma_device_type": 2 00:37:25.354 } 00:37:25.354 ], 00:37:25.354 "driver_specific": {} 00:37:25.354 } 00:37:25.354 ] 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:25.354 05:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.613 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:25.613 "name": "Existed_Raid", 00:37:25.613 "uuid": "0b0562ca-2dd5-4bd2-82df-2bf271c8c1bd", 00:37:25.613 "strip_size_kb": 0, 00:37:25.613 "state": "online", 00:37:25.613 "raid_level": "raid1", 00:37:25.613 "superblock": false, 00:37:25.613 "num_base_bdevs": 3, 00:37:25.613 "num_base_bdevs_discovered": 3, 00:37:25.613 "num_base_bdevs_operational": 3, 00:37:25.613 "base_bdevs_list": [ 00:37:25.613 { 00:37:25.613 "name": "NewBaseBdev", 00:37:25.613 "uuid": "b2b422e3-eb61-479c-8c90-df3ec7b5782c", 00:37:25.613 "is_configured": true, 00:37:25.613 "data_offset": 0, 00:37:25.613 "data_size": 65536 00:37:25.613 }, 00:37:25.613 { 00:37:25.613 "name": "BaseBdev2", 00:37:25.613 "uuid": "0f73a489-7fe9-49cb-b3c2-cabe368d789c", 00:37:25.613 "is_configured": true, 00:37:25.613 "data_offset": 0, 00:37:25.613 "data_size": 65536 00:37:25.613 }, 00:37:25.613 { 00:37:25.613 "name": "BaseBdev3", 00:37:25.614 "uuid": "1c51f53f-7802-48c6-be74-0a10cf5da051", 00:37:25.614 "is_configured": true, 00:37:25.614 "data_offset": 0, 00:37:25.614 "data_size": 65536 00:37:25.614 } 00:37:25.614 ] 00:37:25.614 }' 00:37:25.614 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:25.614 05:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:25.873 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:37:25.873 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:37:25.873 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:37:25.873 05:28:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:37:25.873 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:37:25.873 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:37:25.873 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:37:25.873 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:37:25.873 05:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.873 05:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:25.873 [2024-12-09 05:28:12.814389] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:25.873 05:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.132 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:26.132 "name": "Existed_Raid", 00:37:26.132 "aliases": [ 00:37:26.132 "0b0562ca-2dd5-4bd2-82df-2bf271c8c1bd" 00:37:26.132 ], 00:37:26.132 "product_name": "Raid Volume", 00:37:26.132 "block_size": 512, 00:37:26.132 "num_blocks": 65536, 00:37:26.132 "uuid": "0b0562ca-2dd5-4bd2-82df-2bf271c8c1bd", 00:37:26.132 "assigned_rate_limits": { 00:37:26.132 "rw_ios_per_sec": 0, 00:37:26.132 "rw_mbytes_per_sec": 0, 00:37:26.132 "r_mbytes_per_sec": 0, 00:37:26.132 "w_mbytes_per_sec": 0 00:37:26.132 }, 00:37:26.132 "claimed": false, 00:37:26.132 "zoned": false, 00:37:26.132 "supported_io_types": { 00:37:26.132 "read": true, 00:37:26.132 "write": true, 00:37:26.132 "unmap": false, 00:37:26.132 "flush": false, 00:37:26.132 "reset": true, 00:37:26.132 "nvme_admin": false, 00:37:26.132 "nvme_io": false, 00:37:26.132 "nvme_io_md": false, 00:37:26.132 "write_zeroes": true, 00:37:26.132 "zcopy": false, 00:37:26.132 
"get_zone_info": false, 00:37:26.132 "zone_management": false, 00:37:26.132 "zone_append": false, 00:37:26.132 "compare": false, 00:37:26.132 "compare_and_write": false, 00:37:26.132 "abort": false, 00:37:26.132 "seek_hole": false, 00:37:26.132 "seek_data": false, 00:37:26.132 "copy": false, 00:37:26.132 "nvme_iov_md": false 00:37:26.132 }, 00:37:26.132 "memory_domains": [ 00:37:26.132 { 00:37:26.132 "dma_device_id": "system", 00:37:26.132 "dma_device_type": 1 00:37:26.132 }, 00:37:26.132 { 00:37:26.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:26.132 "dma_device_type": 2 00:37:26.132 }, 00:37:26.132 { 00:37:26.132 "dma_device_id": "system", 00:37:26.132 "dma_device_type": 1 00:37:26.132 }, 00:37:26.132 { 00:37:26.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:26.132 "dma_device_type": 2 00:37:26.132 }, 00:37:26.132 { 00:37:26.132 "dma_device_id": "system", 00:37:26.132 "dma_device_type": 1 00:37:26.132 }, 00:37:26.132 { 00:37:26.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:26.132 "dma_device_type": 2 00:37:26.132 } 00:37:26.132 ], 00:37:26.132 "driver_specific": { 00:37:26.132 "raid": { 00:37:26.132 "uuid": "0b0562ca-2dd5-4bd2-82df-2bf271c8c1bd", 00:37:26.132 "strip_size_kb": 0, 00:37:26.132 "state": "online", 00:37:26.132 "raid_level": "raid1", 00:37:26.132 "superblock": false, 00:37:26.132 "num_base_bdevs": 3, 00:37:26.133 "num_base_bdevs_discovered": 3, 00:37:26.133 "num_base_bdevs_operational": 3, 00:37:26.133 "base_bdevs_list": [ 00:37:26.133 { 00:37:26.133 "name": "NewBaseBdev", 00:37:26.133 "uuid": "b2b422e3-eb61-479c-8c90-df3ec7b5782c", 00:37:26.133 "is_configured": true, 00:37:26.133 "data_offset": 0, 00:37:26.133 "data_size": 65536 00:37:26.133 }, 00:37:26.133 { 00:37:26.133 "name": "BaseBdev2", 00:37:26.133 "uuid": "0f73a489-7fe9-49cb-b3c2-cabe368d789c", 00:37:26.133 "is_configured": true, 00:37:26.133 "data_offset": 0, 00:37:26.133 "data_size": 65536 00:37:26.133 }, 00:37:26.133 { 00:37:26.133 "name": "BaseBdev3", 00:37:26.133 "uuid": 
"1c51f53f-7802-48c6-be74-0a10cf5da051", 00:37:26.133 "is_configured": true, 00:37:26.133 "data_offset": 0, 00:37:26.133 "data_size": 65536 00:37:26.133 } 00:37:26.133 ] 00:37:26.133 } 00:37:26.133 } 00:37:26.133 }' 00:37:26.133 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:26.133 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:37:26.133 BaseBdev2 00:37:26.133 BaseBdev3' 00:37:26.133 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:26.133 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:37:26.133 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:26.133 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:37:26.133 05:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:26.133 05:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.133 05:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:26.133 05:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.133 05:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:26.133 05:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:26.133 05:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:26.133 05:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:37:26.133 05:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.133 05:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:26.133 05:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:26.133 05:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.133 05:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:26.133 05:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:26.133 05:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:26.133 05:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:26.133 05:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:37:26.133 05:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.133 05:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:26.133 05:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.392 05:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:26.392 05:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:26.392 05:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:37:26.392 05:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.392 05:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:26.392 
[2024-12-09 05:28:13.125990] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:26.392 [2024-12-09 05:28:13.126179] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:26.392 [2024-12-09 05:28:13.126284] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:26.392 [2024-12-09 05:28:13.126700] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:26.392 [2024-12-09 05:28:13.126715] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:37:26.392 05:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.392 05:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67502 00:37:26.392 05:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67502 ']' 00:37:26.392 05:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67502 00:37:26.392 05:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:37:26.392 05:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:26.392 05:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67502 00:37:26.392 killing process with pid 67502 00:37:26.392 05:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:26.392 05:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:26.393 05:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67502' 00:37:26.393 05:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67502 00:37:26.393 [2024-12-09 
05:28:13.166273] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:26.393 05:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67502 00:37:26.652 [2024-12-09 05:28:13.403509] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:27.588 05:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:37:27.588 00:37:27.588 real 0m11.877s 00:37:27.588 user 0m19.604s 00:37:27.588 sys 0m1.781s 00:37:27.588 05:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:27.588 05:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:27.588 ************************************ 00:37:27.588 END TEST raid_state_function_test 00:37:27.588 ************************************ 00:37:27.588 05:28:14 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:37:27.588 05:28:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:37:27.588 05:28:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:27.588 05:28:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:27.847 ************************************ 00:37:27.847 START TEST raid_state_function_test_sb 00:37:27.847 ************************************ 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:37:27.847 05:28:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:37:27.847 
05:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68135 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:37:27.847 Process raid pid: 68135 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68135' 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68135 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68135 ']' 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:27.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:27.847 05:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:27.847 [2024-12-09 05:28:14.682258] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:37:27.847 [2024-12-09 05:28:14.682458] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:28.106 [2024-12-09 05:28:14.882304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:28.106 [2024-12-09 05:28:15.041610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:28.364 [2024-12-09 05:28:15.257823] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:28.364 [2024-12-09 05:28:15.258159] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:28.929 05:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:28.929 05:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:37:28.929 05:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:37:28.929 05:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.929 05:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:28.929 [2024-12-09 05:28:15.653949] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:28.929 [2024-12-09 05:28:15.654031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:28.929 [2024-12-09 05:28:15.654047] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:28.929 [2024-12-09 05:28:15.654088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:28.929 [2024-12-09 05:28:15.654100] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:37:28.929 [2024-12-09 05:28:15.654115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:37:28.929 05:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.929 05:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:37:28.929 05:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:28.929 05:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:28.929 05:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:28.929 05:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:28.929 05:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:28.929 05:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:28.929 05:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:28.929 05:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:28.929 05:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:28.929 05:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:28.929 05:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.929 05:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:28.929 05:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:28.929 05:28:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.929 05:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:28.929 "name": "Existed_Raid", 00:37:28.929 "uuid": "245be0a3-9d32-485a-9b47-ce2583b12b46", 00:37:28.929 "strip_size_kb": 0, 00:37:28.929 "state": "configuring", 00:37:28.929 "raid_level": "raid1", 00:37:28.929 "superblock": true, 00:37:28.929 "num_base_bdevs": 3, 00:37:28.929 "num_base_bdevs_discovered": 0, 00:37:28.929 "num_base_bdevs_operational": 3, 00:37:28.929 "base_bdevs_list": [ 00:37:28.929 { 00:37:28.929 "name": "BaseBdev1", 00:37:28.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:28.929 "is_configured": false, 00:37:28.929 "data_offset": 0, 00:37:28.929 "data_size": 0 00:37:28.929 }, 00:37:28.929 { 00:37:28.929 "name": "BaseBdev2", 00:37:28.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:28.929 "is_configured": false, 00:37:28.929 "data_offset": 0, 00:37:28.929 "data_size": 0 00:37:28.929 }, 00:37:28.929 { 00:37:28.929 "name": "BaseBdev3", 00:37:28.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:28.929 "is_configured": false, 00:37:28.929 "data_offset": 0, 00:37:28.929 "data_size": 0 00:37:28.929 } 00:37:28.929 ] 00:37:28.929 }' 00:37:28.929 05:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:28.929 05:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:29.496 [2024-12-09 05:28:16.186136] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:29.496 [2024-12-09 05:28:16.186359] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:29.496 [2024-12-09 05:28:16.194125] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:29.496 [2024-12-09 05:28:16.194181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:29.496 [2024-12-09 05:28:16.194198] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:29.496 [2024-12-09 05:28:16.194215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:29.496 [2024-12-09 05:28:16.194225] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:37:29.496 [2024-12-09 05:28:16.194240] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:29.496 [2024-12-09 05:28:16.241563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:29.496 BaseBdev1 
00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:29.496 [ 00:37:29.496 { 00:37:29.496 "name": "BaseBdev1", 00:37:29.496 "aliases": [ 00:37:29.496 "d0d54c00-980a-4a10-a75d-0ea075c961d7" 00:37:29.496 ], 00:37:29.496 "product_name": "Malloc disk", 00:37:29.496 "block_size": 512, 00:37:29.496 "num_blocks": 65536, 00:37:29.496 "uuid": "d0d54c00-980a-4a10-a75d-0ea075c961d7", 00:37:29.496 "assigned_rate_limits": { 00:37:29.496 
"rw_ios_per_sec": 0, 00:37:29.496 "rw_mbytes_per_sec": 0, 00:37:29.496 "r_mbytes_per_sec": 0, 00:37:29.496 "w_mbytes_per_sec": 0 00:37:29.496 }, 00:37:29.496 "claimed": true, 00:37:29.496 "claim_type": "exclusive_write", 00:37:29.496 "zoned": false, 00:37:29.496 "supported_io_types": { 00:37:29.496 "read": true, 00:37:29.496 "write": true, 00:37:29.496 "unmap": true, 00:37:29.496 "flush": true, 00:37:29.496 "reset": true, 00:37:29.496 "nvme_admin": false, 00:37:29.496 "nvme_io": false, 00:37:29.496 "nvme_io_md": false, 00:37:29.496 "write_zeroes": true, 00:37:29.496 "zcopy": true, 00:37:29.496 "get_zone_info": false, 00:37:29.496 "zone_management": false, 00:37:29.496 "zone_append": false, 00:37:29.496 "compare": false, 00:37:29.496 "compare_and_write": false, 00:37:29.496 "abort": true, 00:37:29.496 "seek_hole": false, 00:37:29.496 "seek_data": false, 00:37:29.496 "copy": true, 00:37:29.496 "nvme_iov_md": false 00:37:29.496 }, 00:37:29.496 "memory_domains": [ 00:37:29.496 { 00:37:29.496 "dma_device_id": "system", 00:37:29.496 "dma_device_type": 1 00:37:29.496 }, 00:37:29.496 { 00:37:29.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:29.496 "dma_device_type": 2 00:37:29.496 } 00:37:29.496 ], 00:37:29.496 "driver_specific": {} 00:37:29.496 } 00:37:29.496 ] 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.496 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:29.496 "name": "Existed_Raid", 00:37:29.496 "uuid": "82fb649b-7cd9-407a-aa21-e1a260dcdc03", 00:37:29.496 "strip_size_kb": 0, 00:37:29.497 "state": "configuring", 00:37:29.497 "raid_level": "raid1", 00:37:29.497 "superblock": true, 00:37:29.497 "num_base_bdevs": 3, 00:37:29.497 "num_base_bdevs_discovered": 1, 00:37:29.497 "num_base_bdevs_operational": 3, 00:37:29.497 "base_bdevs_list": [ 00:37:29.497 { 00:37:29.497 "name": "BaseBdev1", 00:37:29.497 "uuid": "d0d54c00-980a-4a10-a75d-0ea075c961d7", 00:37:29.497 "is_configured": true, 00:37:29.497 "data_offset": 2048, 00:37:29.497 "data_size": 63488 
00:37:29.497 }, 00:37:29.497 { 00:37:29.497 "name": "BaseBdev2", 00:37:29.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:29.497 "is_configured": false, 00:37:29.497 "data_offset": 0, 00:37:29.497 "data_size": 0 00:37:29.497 }, 00:37:29.497 { 00:37:29.497 "name": "BaseBdev3", 00:37:29.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:29.497 "is_configured": false, 00:37:29.497 "data_offset": 0, 00:37:29.497 "data_size": 0 00:37:29.497 } 00:37:29.497 ] 00:37:29.497 }' 00:37:29.497 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:29.497 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:30.063 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:37:30.063 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.063 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:30.063 [2024-12-09 05:28:16.773798] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:30.063 [2024-12-09 05:28:16.773878] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:37:30.063 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.063 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:37:30.063 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.063 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:30.063 [2024-12-09 05:28:16.781793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:30.063 [2024-12-09 05:28:16.784419] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:30.063 [2024-12-09 05:28:16.784616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:30.063 [2024-12-09 05:28:16.784733] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:37:30.063 [2024-12-09 05:28:16.784937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:37:30.063 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.064 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:37:30.064 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:37:30.064 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:37:30.064 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:30.064 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:30.064 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:30.064 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:30.064 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:30.064 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:30.064 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:30.064 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:30.064 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:37:30.064 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:30.064 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.064 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:30.064 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:30.064 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.064 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:30.064 "name": "Existed_Raid", 00:37:30.064 "uuid": "7dc22411-797e-471d-82da-001d556e5d97", 00:37:30.064 "strip_size_kb": 0, 00:37:30.064 "state": "configuring", 00:37:30.064 "raid_level": "raid1", 00:37:30.064 "superblock": true, 00:37:30.064 "num_base_bdevs": 3, 00:37:30.064 "num_base_bdevs_discovered": 1, 00:37:30.064 "num_base_bdevs_operational": 3, 00:37:30.064 "base_bdevs_list": [ 00:37:30.064 { 00:37:30.064 "name": "BaseBdev1", 00:37:30.064 "uuid": "d0d54c00-980a-4a10-a75d-0ea075c961d7", 00:37:30.064 "is_configured": true, 00:37:30.064 "data_offset": 2048, 00:37:30.064 "data_size": 63488 00:37:30.064 }, 00:37:30.064 { 00:37:30.064 "name": "BaseBdev2", 00:37:30.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:30.064 "is_configured": false, 00:37:30.064 "data_offset": 0, 00:37:30.064 "data_size": 0 00:37:30.064 }, 00:37:30.064 { 00:37:30.064 "name": "BaseBdev3", 00:37:30.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:30.064 "is_configured": false, 00:37:30.064 "data_offset": 0, 00:37:30.064 "data_size": 0 00:37:30.064 } 00:37:30.064 ] 00:37:30.064 }' 00:37:30.064 05:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:30.064 05:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:37:30.629 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:37:30.629 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.629 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:30.629 [2024-12-09 05:28:17.347020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:30.629 BaseBdev2 00:37:30.629 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.629 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:37:30.629 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:37:30.629 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:30.629 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:37:30.629 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:30.629 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:30.629 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:30.629 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.629 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:30.629 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.629 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:37:30.629 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:37:30.629 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:30.629 [ 00:37:30.629 { 00:37:30.629 "name": "BaseBdev2", 00:37:30.629 "aliases": [ 00:37:30.629 "ecf7ac57-27ad-4b67-a8f8-8f858bff9cda" 00:37:30.629 ], 00:37:30.629 "product_name": "Malloc disk", 00:37:30.629 "block_size": 512, 00:37:30.630 "num_blocks": 65536, 00:37:30.630 "uuid": "ecf7ac57-27ad-4b67-a8f8-8f858bff9cda", 00:37:30.630 "assigned_rate_limits": { 00:37:30.630 "rw_ios_per_sec": 0, 00:37:30.630 "rw_mbytes_per_sec": 0, 00:37:30.630 "r_mbytes_per_sec": 0, 00:37:30.630 "w_mbytes_per_sec": 0 00:37:30.630 }, 00:37:30.630 "claimed": true, 00:37:30.630 "claim_type": "exclusive_write", 00:37:30.630 "zoned": false, 00:37:30.630 "supported_io_types": { 00:37:30.630 "read": true, 00:37:30.630 "write": true, 00:37:30.630 "unmap": true, 00:37:30.630 "flush": true, 00:37:30.630 "reset": true, 00:37:30.630 "nvme_admin": false, 00:37:30.630 "nvme_io": false, 00:37:30.630 "nvme_io_md": false, 00:37:30.630 "write_zeroes": true, 00:37:30.630 "zcopy": true, 00:37:30.630 "get_zone_info": false, 00:37:30.630 "zone_management": false, 00:37:30.630 "zone_append": false, 00:37:30.630 "compare": false, 00:37:30.630 "compare_and_write": false, 00:37:30.630 "abort": true, 00:37:30.630 "seek_hole": false, 00:37:30.630 "seek_data": false, 00:37:30.630 "copy": true, 00:37:30.630 "nvme_iov_md": false 00:37:30.630 }, 00:37:30.630 "memory_domains": [ 00:37:30.630 { 00:37:30.630 "dma_device_id": "system", 00:37:30.630 "dma_device_type": 1 00:37:30.630 }, 00:37:30.630 { 00:37:30.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:30.630 "dma_device_type": 2 00:37:30.630 } 00:37:30.630 ], 00:37:30.630 "driver_specific": {} 00:37:30.630 } 00:37:30.630 ] 00:37:30.630 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.630 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:37:30.630 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:37:30.630 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:37:30.630 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:37:30.630 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:30.630 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:30.630 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:30.630 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:30.630 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:30.630 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:30.630 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:30.630 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:30.630 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:30.630 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:30.630 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.630 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:30.630 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:30.630 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.630 
05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:30.630 "name": "Existed_Raid", 00:37:30.630 "uuid": "7dc22411-797e-471d-82da-001d556e5d97", 00:37:30.630 "strip_size_kb": 0, 00:37:30.630 "state": "configuring", 00:37:30.630 "raid_level": "raid1", 00:37:30.630 "superblock": true, 00:37:30.630 "num_base_bdevs": 3, 00:37:30.630 "num_base_bdevs_discovered": 2, 00:37:30.630 "num_base_bdevs_operational": 3, 00:37:30.630 "base_bdevs_list": [ 00:37:30.630 { 00:37:30.630 "name": "BaseBdev1", 00:37:30.630 "uuid": "d0d54c00-980a-4a10-a75d-0ea075c961d7", 00:37:30.630 "is_configured": true, 00:37:30.630 "data_offset": 2048, 00:37:30.630 "data_size": 63488 00:37:30.630 }, 00:37:30.630 { 00:37:30.630 "name": "BaseBdev2", 00:37:30.630 "uuid": "ecf7ac57-27ad-4b67-a8f8-8f858bff9cda", 00:37:30.630 "is_configured": true, 00:37:30.630 "data_offset": 2048, 00:37:30.630 "data_size": 63488 00:37:30.630 }, 00:37:30.630 { 00:37:30.630 "name": "BaseBdev3", 00:37:30.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:30.630 "is_configured": false, 00:37:30.630 "data_offset": 0, 00:37:30.630 "data_size": 0 00:37:30.630 } 00:37:30.630 ] 00:37:30.630 }' 00:37:30.630 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:30.630 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:31.195 [2024-12-09 05:28:17.946112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:31.195 [2024-12-09 05:28:17.946406] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:37:31.195 [2024-12-09 05:28:17.946433] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:37:31.195 BaseBdev3 00:37:31.195 [2024-12-09 05:28:17.946772] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:37:31.195 [2024-12-09 05:28:17.946984] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:37:31.195 [2024-12-09 05:28:17.947000] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:37:31.195 [2024-12-09 05:28:17.947161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.195 05:28:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:31.195 [ 00:37:31.195 { 00:37:31.195 "name": "BaseBdev3", 00:37:31.195 "aliases": [ 00:37:31.195 "d695dcd6-3b2f-4dce-893d-b394a5b57344" 00:37:31.195 ], 00:37:31.195 "product_name": "Malloc disk", 00:37:31.195 "block_size": 512, 00:37:31.195 "num_blocks": 65536, 00:37:31.195 "uuid": "d695dcd6-3b2f-4dce-893d-b394a5b57344", 00:37:31.195 "assigned_rate_limits": { 00:37:31.195 "rw_ios_per_sec": 0, 00:37:31.195 "rw_mbytes_per_sec": 0, 00:37:31.195 "r_mbytes_per_sec": 0, 00:37:31.195 "w_mbytes_per_sec": 0 00:37:31.195 }, 00:37:31.195 "claimed": true, 00:37:31.195 "claim_type": "exclusive_write", 00:37:31.195 "zoned": false, 00:37:31.195 "supported_io_types": { 00:37:31.195 "read": true, 00:37:31.195 "write": true, 00:37:31.195 "unmap": true, 00:37:31.195 "flush": true, 00:37:31.195 "reset": true, 00:37:31.195 "nvme_admin": false, 00:37:31.195 "nvme_io": false, 00:37:31.195 "nvme_io_md": false, 00:37:31.195 "write_zeroes": true, 00:37:31.195 "zcopy": true, 00:37:31.195 "get_zone_info": false, 00:37:31.195 "zone_management": false, 00:37:31.195 "zone_append": false, 00:37:31.195 "compare": false, 00:37:31.195 "compare_and_write": false, 00:37:31.195 "abort": true, 00:37:31.195 "seek_hole": false, 00:37:31.195 "seek_data": false, 00:37:31.195 "copy": true, 00:37:31.195 "nvme_iov_md": false 00:37:31.195 }, 00:37:31.195 "memory_domains": [ 00:37:31.195 { 00:37:31.195 "dma_device_id": "system", 00:37:31.195 "dma_device_type": 1 00:37:31.195 }, 00:37:31.195 { 00:37:31.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:31.195 "dma_device_type": 2 00:37:31.195 } 00:37:31.195 ], 00:37:31.195 "driver_specific": {} 00:37:31.195 } 00:37:31.195 ] 
00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:31.195 05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.195 
05:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:31.195 05:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.195 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:31.195 "name": "Existed_Raid", 00:37:31.195 "uuid": "7dc22411-797e-471d-82da-001d556e5d97", 00:37:31.195 "strip_size_kb": 0, 00:37:31.195 "state": "online", 00:37:31.195 "raid_level": "raid1", 00:37:31.195 "superblock": true, 00:37:31.195 "num_base_bdevs": 3, 00:37:31.195 "num_base_bdevs_discovered": 3, 00:37:31.195 "num_base_bdevs_operational": 3, 00:37:31.195 "base_bdevs_list": [ 00:37:31.195 { 00:37:31.195 "name": "BaseBdev1", 00:37:31.195 "uuid": "d0d54c00-980a-4a10-a75d-0ea075c961d7", 00:37:31.195 "is_configured": true, 00:37:31.195 "data_offset": 2048, 00:37:31.195 "data_size": 63488 00:37:31.195 }, 00:37:31.195 { 00:37:31.195 "name": "BaseBdev2", 00:37:31.195 "uuid": "ecf7ac57-27ad-4b67-a8f8-8f858bff9cda", 00:37:31.195 "is_configured": true, 00:37:31.195 "data_offset": 2048, 00:37:31.195 "data_size": 63488 00:37:31.195 }, 00:37:31.195 { 00:37:31.195 "name": "BaseBdev3", 00:37:31.195 "uuid": "d695dcd6-3b2f-4dce-893d-b394a5b57344", 00:37:31.195 "is_configured": true, 00:37:31.195 "data_offset": 2048, 00:37:31.195 "data_size": 63488 00:37:31.195 } 00:37:31.195 ] 00:37:31.195 }' 00:37:31.195 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:31.195 05:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:31.761 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:37:31.761 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:37:31.761 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:37:31.761 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:37:31.761 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:37:31.761 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:37:31.761 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:37:31.761 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:37:31.761 05:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.761 05:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:31.761 [2024-12-09 05:28:18.490628] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:31.761 05:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.761 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:31.761 "name": "Existed_Raid", 00:37:31.761 "aliases": [ 00:37:31.761 "7dc22411-797e-471d-82da-001d556e5d97" 00:37:31.761 ], 00:37:31.761 "product_name": "Raid Volume", 00:37:31.761 "block_size": 512, 00:37:31.761 "num_blocks": 63488, 00:37:31.761 "uuid": "7dc22411-797e-471d-82da-001d556e5d97", 00:37:31.761 "assigned_rate_limits": { 00:37:31.761 "rw_ios_per_sec": 0, 00:37:31.761 "rw_mbytes_per_sec": 0, 00:37:31.761 "r_mbytes_per_sec": 0, 00:37:31.761 "w_mbytes_per_sec": 0 00:37:31.761 }, 00:37:31.761 "claimed": false, 00:37:31.761 "zoned": false, 00:37:31.761 "supported_io_types": { 00:37:31.761 "read": true, 00:37:31.761 "write": true, 00:37:31.761 "unmap": false, 00:37:31.761 "flush": false, 00:37:31.761 "reset": true, 00:37:31.761 "nvme_admin": false, 00:37:31.761 "nvme_io": false, 00:37:31.761 "nvme_io_md": false, 00:37:31.761 "write_zeroes": true, 
00:37:31.761 "zcopy": false, 00:37:31.761 "get_zone_info": false, 00:37:31.761 "zone_management": false, 00:37:31.761 "zone_append": false, 00:37:31.761 "compare": false, 00:37:31.761 "compare_and_write": false, 00:37:31.761 "abort": false, 00:37:31.761 "seek_hole": false, 00:37:31.761 "seek_data": false, 00:37:31.761 "copy": false, 00:37:31.761 "nvme_iov_md": false 00:37:31.761 }, 00:37:31.761 "memory_domains": [ 00:37:31.761 { 00:37:31.761 "dma_device_id": "system", 00:37:31.761 "dma_device_type": 1 00:37:31.761 }, 00:37:31.761 { 00:37:31.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:31.761 "dma_device_type": 2 00:37:31.761 }, 00:37:31.761 { 00:37:31.761 "dma_device_id": "system", 00:37:31.761 "dma_device_type": 1 00:37:31.761 }, 00:37:31.761 { 00:37:31.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:31.761 "dma_device_type": 2 00:37:31.761 }, 00:37:31.761 { 00:37:31.761 "dma_device_id": "system", 00:37:31.761 "dma_device_type": 1 00:37:31.761 }, 00:37:31.761 { 00:37:31.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:31.761 "dma_device_type": 2 00:37:31.761 } 00:37:31.761 ], 00:37:31.761 "driver_specific": { 00:37:31.761 "raid": { 00:37:31.761 "uuid": "7dc22411-797e-471d-82da-001d556e5d97", 00:37:31.761 "strip_size_kb": 0, 00:37:31.761 "state": "online", 00:37:31.761 "raid_level": "raid1", 00:37:31.761 "superblock": true, 00:37:31.761 "num_base_bdevs": 3, 00:37:31.761 "num_base_bdevs_discovered": 3, 00:37:31.761 "num_base_bdevs_operational": 3, 00:37:31.761 "base_bdevs_list": [ 00:37:31.761 { 00:37:31.761 "name": "BaseBdev1", 00:37:31.761 "uuid": "d0d54c00-980a-4a10-a75d-0ea075c961d7", 00:37:31.761 "is_configured": true, 00:37:31.761 "data_offset": 2048, 00:37:31.761 "data_size": 63488 00:37:31.761 }, 00:37:31.761 { 00:37:31.761 "name": "BaseBdev2", 00:37:31.761 "uuid": "ecf7ac57-27ad-4b67-a8f8-8f858bff9cda", 00:37:31.761 "is_configured": true, 00:37:31.761 "data_offset": 2048, 00:37:31.761 "data_size": 63488 00:37:31.761 }, 00:37:31.761 { 
00:37:31.761 "name": "BaseBdev3", 00:37:31.761 "uuid": "d695dcd6-3b2f-4dce-893d-b394a5b57344", 00:37:31.761 "is_configured": true, 00:37:31.761 "data_offset": 2048, 00:37:31.761 "data_size": 63488 00:37:31.761 } 00:37:31.761 ] 00:37:31.761 } 00:37:31.761 } 00:37:31.761 }' 00:37:31.761 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:31.761 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:37:31.761 BaseBdev2 00:37:31.761 BaseBdev3' 00:37:31.761 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:31.761 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:37:31.761 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:31.761 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:37:31.761 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:31.761 05:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.761 05:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:31.761 05:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.761 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:31.761 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:31.761 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:31.761 05:28:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:37:31.761 05:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.761 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:31.761 05:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:31.761 05:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:32.020 [2024-12-09 05:28:18.794450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:32.020 
05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:32.020 "name": "Existed_Raid", 00:37:32.020 "uuid": "7dc22411-797e-471d-82da-001d556e5d97", 00:37:32.020 "strip_size_kb": 0, 00:37:32.020 "state": "online", 00:37:32.020 "raid_level": "raid1", 00:37:32.020 "superblock": true, 00:37:32.020 "num_base_bdevs": 3, 00:37:32.020 "num_base_bdevs_discovered": 2, 00:37:32.020 "num_base_bdevs_operational": 2, 00:37:32.020 "base_bdevs_list": [ 00:37:32.020 { 00:37:32.020 "name": null, 00:37:32.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:32.020 "is_configured": false, 00:37:32.020 "data_offset": 0, 00:37:32.020 "data_size": 63488 00:37:32.020 }, 00:37:32.020 { 00:37:32.020 "name": "BaseBdev2", 00:37:32.020 "uuid": "ecf7ac57-27ad-4b67-a8f8-8f858bff9cda", 00:37:32.020 "is_configured": true, 00:37:32.020 "data_offset": 2048, 00:37:32.020 "data_size": 63488 00:37:32.020 }, 00:37:32.020 { 00:37:32.020 "name": "BaseBdev3", 00:37:32.020 "uuid": "d695dcd6-3b2f-4dce-893d-b394a5b57344", 00:37:32.020 "is_configured": true, 00:37:32.020 "data_offset": 2048, 00:37:32.020 "data_size": 63488 00:37:32.020 } 00:37:32.020 ] 00:37:32.020 }' 00:37:32.020 05:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:32.020 
05:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:32.588 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:37:32.588 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:37:32.588 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:32.588 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:37:32.588 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.588 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:32.588 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.588 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:37:32.588 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:37:32.588 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:37:32.588 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.588 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:32.588 [2024-12-09 05:28:19.451606] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:37:32.588 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.588 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:37:32.588 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:37:32.588 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:37:32.588 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:37:32.588 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.588 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:32.588 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.858 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:37:32.858 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:37:32.858 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:37:32.858 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.858 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:32.858 [2024-12-09 05:28:19.589003] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:37:32.858 [2024-12-09 05:28:19.589163] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:32.858 [2024-12-09 05:28:19.672708] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:32.858 [2024-12-09 05:28:19.672796] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:32.858 [2024-12-09 05:28:19.672830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:37:32.858 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.858 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:37:32.858 05:28:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:37:32.858 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:32.859 BaseBdev2 00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:32.859 [ 00:37:32.859 { 00:37:32.859 "name": "BaseBdev2", 00:37:32.859 "aliases": [ 00:37:32.859 "36f8e0b4-8430-4485-a115-1395ac14534f" 00:37:32.859 ], 00:37:32.859 "product_name": "Malloc disk", 00:37:32.859 "block_size": 512, 00:37:32.859 "num_blocks": 65536, 00:37:32.859 "uuid": "36f8e0b4-8430-4485-a115-1395ac14534f", 00:37:32.859 "assigned_rate_limits": { 00:37:32.859 "rw_ios_per_sec": 0, 00:37:32.859 "rw_mbytes_per_sec": 0, 00:37:32.859 "r_mbytes_per_sec": 0, 00:37:32.859 "w_mbytes_per_sec": 0 00:37:32.859 }, 00:37:32.859 "claimed": false, 00:37:32.859 "zoned": false, 00:37:32.859 "supported_io_types": { 00:37:32.859 "read": true, 00:37:32.859 "write": true, 00:37:32.859 "unmap": true, 00:37:32.859 "flush": true, 00:37:32.859 "reset": true, 00:37:32.859 "nvme_admin": false, 00:37:32.859 "nvme_io": false, 00:37:32.859 
"nvme_io_md": false, 00:37:32.859 "write_zeroes": true, 00:37:32.859 "zcopy": true, 00:37:32.859 "get_zone_info": false, 00:37:32.859 "zone_management": false, 00:37:32.859 "zone_append": false, 00:37:32.859 "compare": false, 00:37:32.859 "compare_and_write": false, 00:37:32.859 "abort": true, 00:37:32.859 "seek_hole": false, 00:37:32.859 "seek_data": false, 00:37:32.859 "copy": true, 00:37:32.859 "nvme_iov_md": false 00:37:32.859 }, 00:37:32.859 "memory_domains": [ 00:37:32.859 { 00:37:32.859 "dma_device_id": "system", 00:37:32.859 "dma_device_type": 1 00:37:32.859 }, 00:37:32.859 { 00:37:32.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:32.859 "dma_device_type": 2 00:37:32.859 } 00:37:32.859 ], 00:37:32.859 "driver_specific": {} 00:37:32.859 } 00:37:32.859 ] 00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.859 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:33.130 BaseBdev3 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:33.130 [ 00:37:33.130 { 00:37:33.130 "name": "BaseBdev3", 00:37:33.130 "aliases": [ 00:37:33.130 "81f3cc91-acd0-4eb6-88d6-9a9719d5757c" 00:37:33.130 ], 00:37:33.130 "product_name": "Malloc disk", 00:37:33.130 "block_size": 512, 00:37:33.130 "num_blocks": 65536, 00:37:33.130 "uuid": "81f3cc91-acd0-4eb6-88d6-9a9719d5757c", 00:37:33.130 "assigned_rate_limits": { 00:37:33.130 "rw_ios_per_sec": 0, 00:37:33.130 "rw_mbytes_per_sec": 0, 00:37:33.130 "r_mbytes_per_sec": 0, 00:37:33.130 "w_mbytes_per_sec": 0 00:37:33.130 }, 00:37:33.130 "claimed": false, 00:37:33.130 "zoned": false, 00:37:33.130 "supported_io_types": { 00:37:33.130 "read": true, 00:37:33.130 "write": true, 00:37:33.130 "unmap": true, 00:37:33.130 "flush": true, 00:37:33.130 "reset": true, 00:37:33.130 "nvme_admin": false, 
00:37:33.130 "nvme_io": false, 00:37:33.130 "nvme_io_md": false, 00:37:33.130 "write_zeroes": true, 00:37:33.130 "zcopy": true, 00:37:33.130 "get_zone_info": false, 00:37:33.130 "zone_management": false, 00:37:33.130 "zone_append": false, 00:37:33.130 "compare": false, 00:37:33.130 "compare_and_write": false, 00:37:33.130 "abort": true, 00:37:33.130 "seek_hole": false, 00:37:33.130 "seek_data": false, 00:37:33.130 "copy": true, 00:37:33.130 "nvme_iov_md": false 00:37:33.130 }, 00:37:33.130 "memory_domains": [ 00:37:33.130 { 00:37:33.130 "dma_device_id": "system", 00:37:33.130 "dma_device_type": 1 00:37:33.130 }, 00:37:33.130 { 00:37:33.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:33.130 "dma_device_type": 2 00:37:33.130 } 00:37:33.130 ], 00:37:33.130 "driver_specific": {} 00:37:33.130 } 00:37:33.130 ] 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:33.130 [2024-12-09 05:28:19.880585] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:33.130 [2024-12-09 05:28:19.880659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:33.130 [2024-12-09 05:28:19.880686] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:33.130 [2024-12-09 05:28:19.883132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:33.130 
05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.130 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:33.130 "name": "Existed_Raid", 00:37:33.130 "uuid": "00b0efae-1289-40d9-8443-340f7c4c0910", 00:37:33.130 "strip_size_kb": 0, 00:37:33.130 "state": "configuring", 00:37:33.130 "raid_level": "raid1", 00:37:33.130 "superblock": true, 00:37:33.130 "num_base_bdevs": 3, 00:37:33.130 "num_base_bdevs_discovered": 2, 00:37:33.130 "num_base_bdevs_operational": 3, 00:37:33.130 "base_bdevs_list": [ 00:37:33.130 { 00:37:33.130 "name": "BaseBdev1", 00:37:33.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:33.130 "is_configured": false, 00:37:33.130 "data_offset": 0, 00:37:33.130 "data_size": 0 00:37:33.131 }, 00:37:33.131 { 00:37:33.131 "name": "BaseBdev2", 00:37:33.131 "uuid": "36f8e0b4-8430-4485-a115-1395ac14534f", 00:37:33.131 "is_configured": true, 00:37:33.131 "data_offset": 2048, 00:37:33.131 "data_size": 63488 00:37:33.131 }, 00:37:33.131 { 00:37:33.131 "name": "BaseBdev3", 00:37:33.131 "uuid": "81f3cc91-acd0-4eb6-88d6-9a9719d5757c", 00:37:33.131 "is_configured": true, 00:37:33.131 "data_offset": 2048, 00:37:33.131 "data_size": 63488 00:37:33.131 } 00:37:33.131 ] 00:37:33.131 }' 00:37:33.131 05:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:33.131 05:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:33.698 05:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:37:33.698 05:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.698 05:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:33.698 [2024-12-09 05:28:20.416750] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:37:33.698 05:28:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.698 05:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:37:33.698 05:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:33.698 05:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:33.698 05:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:33.698 05:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:33.698 05:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:33.698 05:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:33.698 05:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:33.698 05:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:33.698 05:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:33.698 05:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:33.698 05:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:33.698 05:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.698 05:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:33.698 05:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.698 05:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:33.698 "name": 
"Existed_Raid", 00:37:33.698 "uuid": "00b0efae-1289-40d9-8443-340f7c4c0910", 00:37:33.698 "strip_size_kb": 0, 00:37:33.698 "state": "configuring", 00:37:33.698 "raid_level": "raid1", 00:37:33.698 "superblock": true, 00:37:33.698 "num_base_bdevs": 3, 00:37:33.698 "num_base_bdevs_discovered": 1, 00:37:33.698 "num_base_bdevs_operational": 3, 00:37:33.698 "base_bdevs_list": [ 00:37:33.698 { 00:37:33.698 "name": "BaseBdev1", 00:37:33.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:33.698 "is_configured": false, 00:37:33.698 "data_offset": 0, 00:37:33.698 "data_size": 0 00:37:33.698 }, 00:37:33.698 { 00:37:33.698 "name": null, 00:37:33.698 "uuid": "36f8e0b4-8430-4485-a115-1395ac14534f", 00:37:33.698 "is_configured": false, 00:37:33.698 "data_offset": 0, 00:37:33.698 "data_size": 63488 00:37:33.698 }, 00:37:33.698 { 00:37:33.698 "name": "BaseBdev3", 00:37:33.698 "uuid": "81f3cc91-acd0-4eb6-88d6-9a9719d5757c", 00:37:33.698 "is_configured": true, 00:37:33.698 "data_offset": 2048, 00:37:33.698 "data_size": 63488 00:37:33.698 } 00:37:33.698 ] 00:37:33.698 }' 00:37:33.698 05:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:33.698 05:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:34.266 05:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:34.266 05:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.266 05:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:34.266 05:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:37:34.266 05:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.266 05:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:37:34.266 
05:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:37:34.266 05:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.266 05:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:34.266 [2024-12-09 05:28:21.036459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:34.266 BaseBdev1 00:37:34.266 05:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.266 05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:37:34.266 05:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:37:34.266 05:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:34.266 05:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:37:34.266 05:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:34.266 05:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:34.266 05:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:34.266 05:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.266 05:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:34.266 05:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.266 05:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:37:34.267 05:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:37:34.267 05:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:34.267 [ 00:37:34.267 { 00:37:34.267 "name": "BaseBdev1", 00:37:34.267 "aliases": [ 00:37:34.267 "602dc4ed-c177-46d0-839b-7b91236d77ee" 00:37:34.267 ], 00:37:34.267 "product_name": "Malloc disk", 00:37:34.267 "block_size": 512, 00:37:34.267 "num_blocks": 65536, 00:37:34.267 "uuid": "602dc4ed-c177-46d0-839b-7b91236d77ee", 00:37:34.267 "assigned_rate_limits": { 00:37:34.267 "rw_ios_per_sec": 0, 00:37:34.267 "rw_mbytes_per_sec": 0, 00:37:34.267 "r_mbytes_per_sec": 0, 00:37:34.267 "w_mbytes_per_sec": 0 00:37:34.267 }, 00:37:34.267 "claimed": true, 00:37:34.267 "claim_type": "exclusive_write", 00:37:34.267 "zoned": false, 00:37:34.267 "supported_io_types": { 00:37:34.267 "read": true, 00:37:34.267 "write": true, 00:37:34.267 "unmap": true, 00:37:34.267 "flush": true, 00:37:34.267 "reset": true, 00:37:34.267 "nvme_admin": false, 00:37:34.267 "nvme_io": false, 00:37:34.267 "nvme_io_md": false, 00:37:34.267 "write_zeroes": true, 00:37:34.267 "zcopy": true, 00:37:34.267 "get_zone_info": false, 00:37:34.267 "zone_management": false, 00:37:34.267 "zone_append": false, 00:37:34.267 "compare": false, 00:37:34.267 "compare_and_write": false, 00:37:34.267 "abort": true, 00:37:34.267 "seek_hole": false, 00:37:34.267 "seek_data": false, 00:37:34.267 "copy": true, 00:37:34.267 "nvme_iov_md": false 00:37:34.267 }, 00:37:34.267 "memory_domains": [ 00:37:34.267 { 00:37:34.267 "dma_device_id": "system", 00:37:34.267 "dma_device_type": 1 00:37:34.267 }, 00:37:34.267 { 00:37:34.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:34.267 "dma_device_type": 2 00:37:34.267 } 00:37:34.267 ], 00:37:34.267 "driver_specific": {} 00:37:34.267 } 00:37:34.267 ] 00:37:34.267 05:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.267 05:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:37:34.267 
05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:37:34.267 05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:34.267 05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:34.267 05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:34.267 05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:34.267 05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:34.267 05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:34.267 05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:34.267 05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:34.267 05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:34.267 05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:34.267 05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:34.267 05:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.267 05:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:34.267 05:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.267 05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:34.267 "name": "Existed_Raid", 00:37:34.267 "uuid": "00b0efae-1289-40d9-8443-340f7c4c0910", 00:37:34.267 "strip_size_kb": 0, 
00:37:34.267 "state": "configuring", 00:37:34.267 "raid_level": "raid1", 00:37:34.267 "superblock": true, 00:37:34.267 "num_base_bdevs": 3, 00:37:34.267 "num_base_bdevs_discovered": 2, 00:37:34.267 "num_base_bdevs_operational": 3, 00:37:34.267 "base_bdevs_list": [ 00:37:34.267 { 00:37:34.267 "name": "BaseBdev1", 00:37:34.267 "uuid": "602dc4ed-c177-46d0-839b-7b91236d77ee", 00:37:34.267 "is_configured": true, 00:37:34.267 "data_offset": 2048, 00:37:34.267 "data_size": 63488 00:37:34.267 }, 00:37:34.267 { 00:37:34.267 "name": null, 00:37:34.267 "uuid": "36f8e0b4-8430-4485-a115-1395ac14534f", 00:37:34.267 "is_configured": false, 00:37:34.267 "data_offset": 0, 00:37:34.267 "data_size": 63488 00:37:34.267 }, 00:37:34.267 { 00:37:34.267 "name": "BaseBdev3", 00:37:34.267 "uuid": "81f3cc91-acd0-4eb6-88d6-9a9719d5757c", 00:37:34.267 "is_configured": true, 00:37:34.267 "data_offset": 2048, 00:37:34.267 "data_size": 63488 00:37:34.267 } 00:37:34.267 ] 00:37:34.267 }' 00:37:34.267 05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:34.267 05:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:34.835 05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:34.835 05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:37:34.835 05:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.835 05:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:34.835 05:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.835 05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:37:34.835 05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:37:34.835 05:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.835 05:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:34.835 [2024-12-09 05:28:21.668603] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:37:34.835 05:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.835 05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:37:34.835 05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:34.835 05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:34.835 05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:34.835 05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:34.835 05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:34.835 05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:34.835 05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:34.835 05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:34.835 05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:34.835 05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:34.835 05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:34.835 05:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:37:34.835 05:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:34.835 05:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.835 05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:34.835 "name": "Existed_Raid", 00:37:34.835 "uuid": "00b0efae-1289-40d9-8443-340f7c4c0910", 00:37:34.835 "strip_size_kb": 0, 00:37:34.835 "state": "configuring", 00:37:34.835 "raid_level": "raid1", 00:37:34.835 "superblock": true, 00:37:34.835 "num_base_bdevs": 3, 00:37:34.835 "num_base_bdevs_discovered": 1, 00:37:34.835 "num_base_bdevs_operational": 3, 00:37:34.835 "base_bdevs_list": [ 00:37:34.835 { 00:37:34.835 "name": "BaseBdev1", 00:37:34.835 "uuid": "602dc4ed-c177-46d0-839b-7b91236d77ee", 00:37:34.835 "is_configured": true, 00:37:34.835 "data_offset": 2048, 00:37:34.835 "data_size": 63488 00:37:34.835 }, 00:37:34.835 { 00:37:34.835 "name": null, 00:37:34.835 "uuid": "36f8e0b4-8430-4485-a115-1395ac14534f", 00:37:34.835 "is_configured": false, 00:37:34.835 "data_offset": 0, 00:37:34.835 "data_size": 63488 00:37:34.835 }, 00:37:34.835 { 00:37:34.835 "name": null, 00:37:34.835 "uuid": "81f3cc91-acd0-4eb6-88d6-9a9719d5757c", 00:37:34.835 "is_configured": false, 00:37:34.835 "data_offset": 0, 00:37:34.835 "data_size": 63488 00:37:34.835 } 00:37:34.835 ] 00:37:34.835 }' 00:37:34.835 05:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:34.835 05:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:35.404 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:35.404 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:37:35.404 05:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:37:35.404 05:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:35.404 05:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.404 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:37:35.404 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:37:35.404 05:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.404 05:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:35.404 [2024-12-09 05:28:22.252876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:35.404 05:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.404 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:37:35.404 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:35.404 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:35.404 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:35.404 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:35.404 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:35.404 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:35.404 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:35.404 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:37:35.404 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:35.404 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:35.404 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:35.404 05:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.404 05:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:35.404 05:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.404 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:35.404 "name": "Existed_Raid", 00:37:35.404 "uuid": "00b0efae-1289-40d9-8443-340f7c4c0910", 00:37:35.404 "strip_size_kb": 0, 00:37:35.404 "state": "configuring", 00:37:35.404 "raid_level": "raid1", 00:37:35.404 "superblock": true, 00:37:35.404 "num_base_bdevs": 3, 00:37:35.404 "num_base_bdevs_discovered": 2, 00:37:35.404 "num_base_bdevs_operational": 3, 00:37:35.404 "base_bdevs_list": [ 00:37:35.404 { 00:37:35.404 "name": "BaseBdev1", 00:37:35.404 "uuid": "602dc4ed-c177-46d0-839b-7b91236d77ee", 00:37:35.404 "is_configured": true, 00:37:35.404 "data_offset": 2048, 00:37:35.404 "data_size": 63488 00:37:35.404 }, 00:37:35.404 { 00:37:35.404 "name": null, 00:37:35.404 "uuid": "36f8e0b4-8430-4485-a115-1395ac14534f", 00:37:35.404 "is_configured": false, 00:37:35.404 "data_offset": 0, 00:37:35.404 "data_size": 63488 00:37:35.404 }, 00:37:35.404 { 00:37:35.404 "name": "BaseBdev3", 00:37:35.404 "uuid": "81f3cc91-acd0-4eb6-88d6-9a9719d5757c", 00:37:35.404 "is_configured": true, 00:37:35.404 "data_offset": 2048, 00:37:35.404 "data_size": 63488 00:37:35.404 } 00:37:35.404 ] 00:37:35.404 }' 00:37:35.404 05:28:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:35.404 05:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:35.972 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:35.972 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:37:35.972 05:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.972 05:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:35.972 05:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.972 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:37:35.972 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:37:35.972 05:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.972 05:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:35.972 [2024-12-09 05:28:22.840978] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:35.972 05:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.972 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:37:35.972 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:35.972 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:35.972 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:35.972 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:37:35.972 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:35.972 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:35.972 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:35.972 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:35.972 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:35.972 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:35.972 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:35.972 05:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.972 05:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:36.232 05:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.232 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:36.232 "name": "Existed_Raid", 00:37:36.232 "uuid": "00b0efae-1289-40d9-8443-340f7c4c0910", 00:37:36.232 "strip_size_kb": 0, 00:37:36.232 "state": "configuring", 00:37:36.232 "raid_level": "raid1", 00:37:36.232 "superblock": true, 00:37:36.232 "num_base_bdevs": 3, 00:37:36.232 "num_base_bdevs_discovered": 1, 00:37:36.232 "num_base_bdevs_operational": 3, 00:37:36.232 "base_bdevs_list": [ 00:37:36.232 { 00:37:36.232 "name": null, 00:37:36.232 "uuid": "602dc4ed-c177-46d0-839b-7b91236d77ee", 00:37:36.232 "is_configured": false, 00:37:36.232 "data_offset": 0, 00:37:36.232 "data_size": 63488 00:37:36.232 }, 00:37:36.232 { 00:37:36.232 "name": null, 00:37:36.232 "uuid": 
"36f8e0b4-8430-4485-a115-1395ac14534f", 00:37:36.232 "is_configured": false, 00:37:36.232 "data_offset": 0, 00:37:36.232 "data_size": 63488 00:37:36.232 }, 00:37:36.232 { 00:37:36.232 "name": "BaseBdev3", 00:37:36.232 "uuid": "81f3cc91-acd0-4eb6-88d6-9a9719d5757c", 00:37:36.232 "is_configured": true, 00:37:36.232 "data_offset": 2048, 00:37:36.232 "data_size": 63488 00:37:36.232 } 00:37:36.232 ] 00:37:36.232 }' 00:37:36.232 05:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:36.232 05:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:36.491 05:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:36.491 05:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:37:36.491 05:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.491 05:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:36.491 05:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.751 05:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:37:36.751 05:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:37:36.751 05:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.751 05:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:36.751 [2024-12-09 05:28:23.485086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:36.751 05:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.751 05:28:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:37:36.751 05:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:36.751 05:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:36.751 05:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:36.751 05:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:36.751 05:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:36.751 05:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:36.751 05:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:36.751 05:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:36.751 05:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:36.751 05:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:36.751 05:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:36.751 05:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.751 05:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:36.751 05:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.751 05:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:36.751 "name": "Existed_Raid", 00:37:36.751 "uuid": "00b0efae-1289-40d9-8443-340f7c4c0910", 00:37:36.751 "strip_size_kb": 0, 00:37:36.751 "state": "configuring", 00:37:36.751 
"raid_level": "raid1", 00:37:36.751 "superblock": true, 00:37:36.751 "num_base_bdevs": 3, 00:37:36.751 "num_base_bdevs_discovered": 2, 00:37:36.751 "num_base_bdevs_operational": 3, 00:37:36.751 "base_bdevs_list": [ 00:37:36.751 { 00:37:36.751 "name": null, 00:37:36.751 "uuid": "602dc4ed-c177-46d0-839b-7b91236d77ee", 00:37:36.751 "is_configured": false, 00:37:36.751 "data_offset": 0, 00:37:36.751 "data_size": 63488 00:37:36.751 }, 00:37:36.751 { 00:37:36.751 "name": "BaseBdev2", 00:37:36.751 "uuid": "36f8e0b4-8430-4485-a115-1395ac14534f", 00:37:36.751 "is_configured": true, 00:37:36.751 "data_offset": 2048, 00:37:36.751 "data_size": 63488 00:37:36.751 }, 00:37:36.751 { 00:37:36.751 "name": "BaseBdev3", 00:37:36.751 "uuid": "81f3cc91-acd0-4eb6-88d6-9a9719d5757c", 00:37:36.751 "is_configured": true, 00:37:36.751 "data_offset": 2048, 00:37:36.751 "data_size": 63488 00:37:36.751 } 00:37:36.751 ] 00:37:36.751 }' 00:37:36.751 05:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:36.751 05:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:37.319 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:37.319 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:37:37.319 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.319 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:37.319 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.319 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:37:37.319 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:37.319 05:28:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.319 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:37:37.319 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 602dc4ed-c177-46d0-839b-7b91236d77ee 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:37.320 [2024-12-09 05:28:24.202164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:37:37.320 [2024-12-09 05:28:24.202705] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:37:37.320 NewBaseBdev 00:37:37.320 [2024-12-09 05:28:24.202897] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:37:37.320 [2024-12-09 05:28:24.203259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:37:37.320 [2024-12-09 05:28:24.203448] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:37:37.320 [2024-12-09 05:28:24.203469] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:37:37.320 [2024-12-09 05:28:24.203626] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:37:37.320 
05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:37.320 [ 00:37:37.320 { 00:37:37.320 "name": "NewBaseBdev", 00:37:37.320 "aliases": [ 00:37:37.320 "602dc4ed-c177-46d0-839b-7b91236d77ee" 00:37:37.320 ], 00:37:37.320 "product_name": "Malloc disk", 00:37:37.320 "block_size": 512, 00:37:37.320 "num_blocks": 65536, 00:37:37.320 "uuid": "602dc4ed-c177-46d0-839b-7b91236d77ee", 00:37:37.320 "assigned_rate_limits": { 00:37:37.320 "rw_ios_per_sec": 0, 00:37:37.320 "rw_mbytes_per_sec": 0, 00:37:37.320 "r_mbytes_per_sec": 0, 00:37:37.320 "w_mbytes_per_sec": 0 00:37:37.320 }, 00:37:37.320 "claimed": true, 00:37:37.320 "claim_type": "exclusive_write", 00:37:37.320 
"zoned": false, 00:37:37.320 "supported_io_types": { 00:37:37.320 "read": true, 00:37:37.320 "write": true, 00:37:37.320 "unmap": true, 00:37:37.320 "flush": true, 00:37:37.320 "reset": true, 00:37:37.320 "nvme_admin": false, 00:37:37.320 "nvme_io": false, 00:37:37.320 "nvme_io_md": false, 00:37:37.320 "write_zeroes": true, 00:37:37.320 "zcopy": true, 00:37:37.320 "get_zone_info": false, 00:37:37.320 "zone_management": false, 00:37:37.320 "zone_append": false, 00:37:37.320 "compare": false, 00:37:37.320 "compare_and_write": false, 00:37:37.320 "abort": true, 00:37:37.320 "seek_hole": false, 00:37:37.320 "seek_data": false, 00:37:37.320 "copy": true, 00:37:37.320 "nvme_iov_md": false 00:37:37.320 }, 00:37:37.320 "memory_domains": [ 00:37:37.320 { 00:37:37.320 "dma_device_id": "system", 00:37:37.320 "dma_device_type": 1 00:37:37.320 }, 00:37:37.320 { 00:37:37.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:37.320 "dma_device_type": 2 00:37:37.320 } 00:37:37.320 ], 00:37:37.320 "driver_specific": {} 00:37:37.320 } 00:37:37.320 ] 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:37.320 "name": "Existed_Raid", 00:37:37.320 "uuid": "00b0efae-1289-40d9-8443-340f7c4c0910", 00:37:37.320 "strip_size_kb": 0, 00:37:37.320 "state": "online", 00:37:37.320 "raid_level": "raid1", 00:37:37.320 "superblock": true, 00:37:37.320 "num_base_bdevs": 3, 00:37:37.320 "num_base_bdevs_discovered": 3, 00:37:37.320 "num_base_bdevs_operational": 3, 00:37:37.320 "base_bdevs_list": [ 00:37:37.320 { 00:37:37.320 "name": "NewBaseBdev", 00:37:37.320 "uuid": "602dc4ed-c177-46d0-839b-7b91236d77ee", 00:37:37.320 "is_configured": true, 00:37:37.320 "data_offset": 2048, 00:37:37.320 "data_size": 63488 00:37:37.320 }, 00:37:37.320 { 00:37:37.320 "name": "BaseBdev2", 00:37:37.320 "uuid": "36f8e0b4-8430-4485-a115-1395ac14534f", 00:37:37.320 "is_configured": true, 00:37:37.320 "data_offset": 2048, 00:37:37.320 "data_size": 63488 00:37:37.320 }, 00:37:37.320 
{ 00:37:37.320 "name": "BaseBdev3", 00:37:37.320 "uuid": "81f3cc91-acd0-4eb6-88d6-9a9719d5757c", 00:37:37.320 "is_configured": true, 00:37:37.320 "data_offset": 2048, 00:37:37.320 "data_size": 63488 00:37:37.320 } 00:37:37.320 ] 00:37:37.320 }' 00:37:37.320 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:37.321 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:37.888 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:37:37.888 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:37:37.888 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:37:37.888 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:37:37.888 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:37:37.888 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:37:37.888 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:37:37.889 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:37:37.889 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.889 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:37.889 [2024-12-09 05:28:24.758754] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:37.889 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.889 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:37.889 "name": "Existed_Raid", 00:37:37.889 
"aliases": [ 00:37:37.889 "00b0efae-1289-40d9-8443-340f7c4c0910" 00:37:37.889 ], 00:37:37.889 "product_name": "Raid Volume", 00:37:37.889 "block_size": 512, 00:37:37.889 "num_blocks": 63488, 00:37:37.889 "uuid": "00b0efae-1289-40d9-8443-340f7c4c0910", 00:37:37.889 "assigned_rate_limits": { 00:37:37.889 "rw_ios_per_sec": 0, 00:37:37.889 "rw_mbytes_per_sec": 0, 00:37:37.889 "r_mbytes_per_sec": 0, 00:37:37.889 "w_mbytes_per_sec": 0 00:37:37.889 }, 00:37:37.889 "claimed": false, 00:37:37.889 "zoned": false, 00:37:37.889 "supported_io_types": { 00:37:37.889 "read": true, 00:37:37.889 "write": true, 00:37:37.889 "unmap": false, 00:37:37.889 "flush": false, 00:37:37.889 "reset": true, 00:37:37.889 "nvme_admin": false, 00:37:37.889 "nvme_io": false, 00:37:37.889 "nvme_io_md": false, 00:37:37.889 "write_zeroes": true, 00:37:37.889 "zcopy": false, 00:37:37.889 "get_zone_info": false, 00:37:37.889 "zone_management": false, 00:37:37.889 "zone_append": false, 00:37:37.889 "compare": false, 00:37:37.889 "compare_and_write": false, 00:37:37.889 "abort": false, 00:37:37.889 "seek_hole": false, 00:37:37.889 "seek_data": false, 00:37:37.889 "copy": false, 00:37:37.889 "nvme_iov_md": false 00:37:37.889 }, 00:37:37.889 "memory_domains": [ 00:37:37.889 { 00:37:37.889 "dma_device_id": "system", 00:37:37.889 "dma_device_type": 1 00:37:37.889 }, 00:37:37.889 { 00:37:37.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:37.889 "dma_device_type": 2 00:37:37.889 }, 00:37:37.889 { 00:37:37.889 "dma_device_id": "system", 00:37:37.889 "dma_device_type": 1 00:37:37.889 }, 00:37:37.889 { 00:37:37.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:37.889 "dma_device_type": 2 00:37:37.889 }, 00:37:37.889 { 00:37:37.889 "dma_device_id": "system", 00:37:37.889 "dma_device_type": 1 00:37:37.889 }, 00:37:37.889 { 00:37:37.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:37.889 "dma_device_type": 2 00:37:37.889 } 00:37:37.889 ], 00:37:37.889 "driver_specific": { 00:37:37.889 "raid": { 00:37:37.889 
"uuid": "00b0efae-1289-40d9-8443-340f7c4c0910", 00:37:37.889 "strip_size_kb": 0, 00:37:37.889 "state": "online", 00:37:37.889 "raid_level": "raid1", 00:37:37.889 "superblock": true, 00:37:37.889 "num_base_bdevs": 3, 00:37:37.889 "num_base_bdevs_discovered": 3, 00:37:37.889 "num_base_bdevs_operational": 3, 00:37:37.889 "base_bdevs_list": [ 00:37:37.889 { 00:37:37.889 "name": "NewBaseBdev", 00:37:37.889 "uuid": "602dc4ed-c177-46d0-839b-7b91236d77ee", 00:37:37.889 "is_configured": true, 00:37:37.889 "data_offset": 2048, 00:37:37.889 "data_size": 63488 00:37:37.889 }, 00:37:37.889 { 00:37:37.889 "name": "BaseBdev2", 00:37:37.889 "uuid": "36f8e0b4-8430-4485-a115-1395ac14534f", 00:37:37.889 "is_configured": true, 00:37:37.889 "data_offset": 2048, 00:37:37.889 "data_size": 63488 00:37:37.889 }, 00:37:37.889 { 00:37:37.889 "name": "BaseBdev3", 00:37:37.889 "uuid": "81f3cc91-acd0-4eb6-88d6-9a9719d5757c", 00:37:37.889 "is_configured": true, 00:37:37.889 "data_offset": 2048, 00:37:37.889 "data_size": 63488 00:37:37.889 } 00:37:37.889 ] 00:37:37.889 } 00:37:37.889 } 00:37:37.889 }' 00:37:37.889 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:37.889 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:37:37.889 BaseBdev2 00:37:37.889 BaseBdev3' 00:37:38.148 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:38.148 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:37:38.148 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:38.148 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:37:38.148 05:28:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.148 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:38.148 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:38.148 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.148 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:38.148 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:38.148 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:38.148 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:37:38.148 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:38.148 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.148 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:38.148 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.148 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:38.148 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:38.148 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:38.148 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:38.148 05:28:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:37:38.148 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.148 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:38.148 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.148 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:38.148 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:38.148 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:37:38.148 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.148 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:38.148 [2024-12-09 05:28:25.086393] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:38.148 [2024-12-09 05:28:25.086596] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:38.148 [2024-12-09 05:28:25.086688] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:38.148 [2024-12-09 05:28:25.087088] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:38.148 [2024-12-09 05:28:25.087121] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:37:38.148 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.148 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68135 00:37:38.148 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 68135 ']' 00:37:38.148 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68135 00:37:38.148 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:37:38.148 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:38.148 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68135 00:37:38.407 killing process with pid 68135 00:37:38.407 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:38.407 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:38.407 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68135' 00:37:38.407 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68135 00:37:38.407 [2024-12-09 05:28:25.125613] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:38.407 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68135 00:37:38.665 [2024-12-09 05:28:25.379353] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:39.600 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:37:39.600 00:37:39.600 real 0m11.938s 00:37:39.600 user 0m19.651s 00:37:39.600 sys 0m1.800s 00:37:39.600 05:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:39.600 05:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:39.600 ************************************ 00:37:39.600 END TEST raid_state_function_test_sb 00:37:39.600 ************************************ 00:37:39.600 05:28:26 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:37:39.600 05:28:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:39.600 05:28:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:39.600 05:28:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:39.600 ************************************ 00:37:39.600 START TEST raid_superblock_test 00:37:39.600 ************************************ 00:37:39.600 05:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:37:39.600 05:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:37:39.858 05:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:37:39.858 05:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:37:39.858 05:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:37:39.858 05:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:37:39.859 05:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:37:39.859 05:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:37:39.859 05:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:37:39.859 05:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:37:39.859 05:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:37:39.859 05:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:37:39.859 05:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:37:39.859 05:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:37:39.859 05:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:37:39.859 05:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:37:39.859 05:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68772 00:37:39.859 05:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:37:39.859 05:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68772 00:37:39.859 05:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68772 ']' 00:37:39.859 05:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:39.859 05:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:39.859 05:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:39.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:39.859 05:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:39.859 05:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:39.859 [2024-12-09 05:28:26.678409] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:37:39.859 [2024-12-09 05:28:26.678909] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68772 ] 00:37:40.117 [2024-12-09 05:28:26.865345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:40.117 [2024-12-09 05:28:27.002462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:40.375 [2024-12-09 05:28:27.208398] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:40.375 [2024-12-09 05:28:27.208469] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:37:41.012 
05:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:41.012 malloc1 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:41.012 [2024-12-09 05:28:27.700380] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:41.012 [2024-12-09 05:28:27.700720] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:41.012 [2024-12-09 05:28:27.700761] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:37:41.012 [2024-12-09 05:28:27.700776] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:41.012 [2024-12-09 05:28:27.703552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:41.012 [2024-12-09 05:28:27.703593] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:41.012 pt1 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:41.012 malloc2 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:41.012 [2024-12-09 05:28:27.753468] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:41.012 [2024-12-09 05:28:27.753565] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:41.012 [2024-12-09 05:28:27.753601] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:37:41.012 [2024-12-09 05:28:27.753614] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:41.012 [2024-12-09 05:28:27.756331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:41.012 [2024-12-09 05:28:27.756373] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:41.012 
pt2 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:41.012 malloc3 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:41.012 [2024-12-09 05:28:27.815153] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:37:41.012 [2024-12-09 05:28:27.815242] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:41.012 [2024-12-09 05:28:27.815276] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:37:41.012 [2024-12-09 05:28:27.815290] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:41.012 [2024-12-09 05:28:27.817847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:41.012 [2024-12-09 05:28:27.817888] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:37:41.012 pt3 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:41.012 [2024-12-09 05:28:27.823202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:41.012 [2024-12-09 05:28:27.825412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:41.012 [2024-12-09 05:28:27.825651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:37:41.012 [2024-12-09 05:28:27.825875] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:37:41.012 [2024-12-09 05:28:27.825903] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:37:41.012 [2024-12-09 05:28:27.826294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:37:41.012 
[2024-12-09 05:28:27.826527] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:37:41.012 [2024-12-09 05:28:27.826547] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:37:41.012 [2024-12-09 05:28:27.826745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:41.012 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:41.013 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:41.013 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:41.013 05:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.013 05:28:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:37:41.013 05:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.013 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:41.013 "name": "raid_bdev1", 00:37:41.013 "uuid": "3bf1f031-964c-40c5-8834-c4d620cf1c38", 00:37:41.013 "strip_size_kb": 0, 00:37:41.013 "state": "online", 00:37:41.013 "raid_level": "raid1", 00:37:41.013 "superblock": true, 00:37:41.013 "num_base_bdevs": 3, 00:37:41.013 "num_base_bdevs_discovered": 3, 00:37:41.013 "num_base_bdevs_operational": 3, 00:37:41.013 "base_bdevs_list": [ 00:37:41.013 { 00:37:41.013 "name": "pt1", 00:37:41.013 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:41.013 "is_configured": true, 00:37:41.013 "data_offset": 2048, 00:37:41.013 "data_size": 63488 00:37:41.013 }, 00:37:41.013 { 00:37:41.013 "name": "pt2", 00:37:41.013 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:41.013 "is_configured": true, 00:37:41.013 "data_offset": 2048, 00:37:41.013 "data_size": 63488 00:37:41.013 }, 00:37:41.013 { 00:37:41.013 "name": "pt3", 00:37:41.013 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:41.013 "is_configured": true, 00:37:41.013 "data_offset": 2048, 00:37:41.013 "data_size": 63488 00:37:41.013 } 00:37:41.013 ] 00:37:41.013 }' 00:37:41.013 05:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:41.013 05:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:41.579 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:37:41.579 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:37:41.579 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:37:41.579 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:37:41.579 05:28:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:37:41.579 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:37:41.579 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:41.579 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:37:41.579 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.579 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:41.579 [2024-12-09 05:28:28.339852] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:41.579 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.579 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:41.579 "name": "raid_bdev1", 00:37:41.579 "aliases": [ 00:37:41.579 "3bf1f031-964c-40c5-8834-c4d620cf1c38" 00:37:41.579 ], 00:37:41.579 "product_name": "Raid Volume", 00:37:41.579 "block_size": 512, 00:37:41.579 "num_blocks": 63488, 00:37:41.579 "uuid": "3bf1f031-964c-40c5-8834-c4d620cf1c38", 00:37:41.579 "assigned_rate_limits": { 00:37:41.579 "rw_ios_per_sec": 0, 00:37:41.579 "rw_mbytes_per_sec": 0, 00:37:41.579 "r_mbytes_per_sec": 0, 00:37:41.579 "w_mbytes_per_sec": 0 00:37:41.579 }, 00:37:41.579 "claimed": false, 00:37:41.579 "zoned": false, 00:37:41.579 "supported_io_types": { 00:37:41.579 "read": true, 00:37:41.579 "write": true, 00:37:41.579 "unmap": false, 00:37:41.579 "flush": false, 00:37:41.579 "reset": true, 00:37:41.579 "nvme_admin": false, 00:37:41.579 "nvme_io": false, 00:37:41.579 "nvme_io_md": false, 00:37:41.579 "write_zeroes": true, 00:37:41.579 "zcopy": false, 00:37:41.579 "get_zone_info": false, 00:37:41.579 "zone_management": false, 00:37:41.579 "zone_append": false, 00:37:41.579 "compare": false, 00:37:41.579 
"compare_and_write": false, 00:37:41.579 "abort": false, 00:37:41.579 "seek_hole": false, 00:37:41.579 "seek_data": false, 00:37:41.579 "copy": false, 00:37:41.579 "nvme_iov_md": false 00:37:41.579 }, 00:37:41.579 "memory_domains": [ 00:37:41.579 { 00:37:41.579 "dma_device_id": "system", 00:37:41.579 "dma_device_type": 1 00:37:41.579 }, 00:37:41.579 { 00:37:41.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:41.579 "dma_device_type": 2 00:37:41.579 }, 00:37:41.579 { 00:37:41.579 "dma_device_id": "system", 00:37:41.579 "dma_device_type": 1 00:37:41.579 }, 00:37:41.579 { 00:37:41.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:41.579 "dma_device_type": 2 00:37:41.579 }, 00:37:41.579 { 00:37:41.579 "dma_device_id": "system", 00:37:41.579 "dma_device_type": 1 00:37:41.579 }, 00:37:41.579 { 00:37:41.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:41.579 "dma_device_type": 2 00:37:41.579 } 00:37:41.579 ], 00:37:41.579 "driver_specific": { 00:37:41.579 "raid": { 00:37:41.579 "uuid": "3bf1f031-964c-40c5-8834-c4d620cf1c38", 00:37:41.579 "strip_size_kb": 0, 00:37:41.579 "state": "online", 00:37:41.579 "raid_level": "raid1", 00:37:41.579 "superblock": true, 00:37:41.579 "num_base_bdevs": 3, 00:37:41.579 "num_base_bdevs_discovered": 3, 00:37:41.579 "num_base_bdevs_operational": 3, 00:37:41.579 "base_bdevs_list": [ 00:37:41.579 { 00:37:41.579 "name": "pt1", 00:37:41.579 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:41.579 "is_configured": true, 00:37:41.579 "data_offset": 2048, 00:37:41.579 "data_size": 63488 00:37:41.579 }, 00:37:41.579 { 00:37:41.579 "name": "pt2", 00:37:41.579 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:41.579 "is_configured": true, 00:37:41.579 "data_offset": 2048, 00:37:41.579 "data_size": 63488 00:37:41.579 }, 00:37:41.579 { 00:37:41.579 "name": "pt3", 00:37:41.579 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:41.579 "is_configured": true, 00:37:41.579 "data_offset": 2048, 00:37:41.579 "data_size": 63488 00:37:41.579 } 
00:37:41.579 ] 00:37:41.579 } 00:37:41.580 } 00:37:41.580 }' 00:37:41.580 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:41.580 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:37:41.580 pt2 00:37:41.580 pt3' 00:37:41.580 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:41.580 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:37:41.580 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:41.580 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:37:41.580 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.580 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:41.580 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:41.580 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.580 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:41.580 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:41.580 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:41.580 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:37:41.580 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:41.580 05:28:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.580 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:41.837 [2024-12-09 05:28:28.655746] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3bf1f031-964c-40c5-8834-c4d620cf1c38 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3bf1f031-964c-40c5-8834-c4d620cf1c38 ']' 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:41.837 [2024-12-09 05:28:28.703476] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:41.837 [2024-12-09 05:28:28.703504] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:41.837 [2024-12-09 05:28:28.703585] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:41.837 [2024-12-09 05:28:28.703677] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:41.837 [2024-12-09 05:28:28.703692] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:37:41.837 05:28:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.837 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:42.094 [2024-12-09 05:28:28.859591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:37:42.094 [2024-12-09 05:28:28.862229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:37:42.094 [2024-12-09 05:28:28.862309] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:37:42.094 [2024-12-09 05:28:28.862383] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:37:42.094 [2024-12-09 05:28:28.862457] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:37:42.094 [2024-12-09 05:28:28.862493] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:37:42.094 [2024-12-09 05:28:28.862522] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:42.094 [2024-12-09 05:28:28.862536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:37:42.094 request: 00:37:42.094 { 00:37:42.094 "name": "raid_bdev1", 00:37:42.094 "raid_level": "raid1", 00:37:42.094 "base_bdevs": [ 00:37:42.094 "malloc1", 00:37:42.094 "malloc2", 00:37:42.094 "malloc3" 00:37:42.094 ], 00:37:42.094 "superblock": false, 00:37:42.094 "method": "bdev_raid_create", 00:37:42.094 "req_id": 1 00:37:42.094 } 00:37:42.094 Got JSON-RPC error response 00:37:42.094 response: 00:37:42.094 { 00:37:42.094 "code": -17, 00:37:42.094 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:37:42.094 } 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:42.094 [2024-12-09 05:28:28.931541] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:42.094 [2024-12-09 05:28:28.931733] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:42.094 [2024-12-09 05:28:28.931849] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:37:42.094 [2024-12-09 05:28:28.931950] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:42.094 [2024-12-09 05:28:28.935108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:42.094 [2024-12-09 05:28:28.935291] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:42.094 [2024-12-09 05:28:28.935536] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:37:42.094 [2024-12-09 05:28:28.935605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:42.094 pt1 00:37:42.094 
05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:42.094 "name": "raid_bdev1", 00:37:42.094 "uuid": "3bf1f031-964c-40c5-8834-c4d620cf1c38", 00:37:42.094 "strip_size_kb": 0, 00:37:42.094 
"state": "configuring", 00:37:42.094 "raid_level": "raid1", 00:37:42.094 "superblock": true, 00:37:42.094 "num_base_bdevs": 3, 00:37:42.094 "num_base_bdevs_discovered": 1, 00:37:42.094 "num_base_bdevs_operational": 3, 00:37:42.094 "base_bdevs_list": [ 00:37:42.094 { 00:37:42.094 "name": "pt1", 00:37:42.094 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:42.094 "is_configured": true, 00:37:42.094 "data_offset": 2048, 00:37:42.094 "data_size": 63488 00:37:42.094 }, 00:37:42.094 { 00:37:42.094 "name": null, 00:37:42.094 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:42.094 "is_configured": false, 00:37:42.094 "data_offset": 2048, 00:37:42.094 "data_size": 63488 00:37:42.094 }, 00:37:42.094 { 00:37:42.094 "name": null, 00:37:42.094 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:42.094 "is_configured": false, 00:37:42.094 "data_offset": 2048, 00:37:42.094 "data_size": 63488 00:37:42.094 } 00:37:42.094 ] 00:37:42.094 }' 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:42.094 05:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:42.659 05:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:37:42.659 05:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:42.659 05:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.659 05:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:42.659 [2024-12-09 05:28:29.471707] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:42.659 [2024-12-09 05:28:29.471792] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:42.659 [2024-12-09 05:28:29.471825] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:37:42.659 
[2024-12-09 05:28:29.471838] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:42.659 [2024-12-09 05:28:29.472291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:42.659 [2024-12-09 05:28:29.472332] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:42.659 [2024-12-09 05:28:29.472418] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:37:42.659 [2024-12-09 05:28:29.472447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:42.659 pt2 00:37:42.659 05:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.659 05:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:37:42.659 05:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.659 05:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:42.659 [2024-12-09 05:28:29.479759] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:37:42.659 05:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.659 05:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:37:42.659 05:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:42.659 05:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:42.659 05:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:42.659 05:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:42.659 05:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:42.659 05:28:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:42.659 05:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:42.659 05:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:42.659 05:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:42.659 05:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:42.659 05:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:42.659 05:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.659 05:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:42.659 05:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.659 05:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:42.659 "name": "raid_bdev1", 00:37:42.659 "uuid": "3bf1f031-964c-40c5-8834-c4d620cf1c38", 00:37:42.659 "strip_size_kb": 0, 00:37:42.659 "state": "configuring", 00:37:42.659 "raid_level": "raid1", 00:37:42.659 "superblock": true, 00:37:42.659 "num_base_bdevs": 3, 00:37:42.659 "num_base_bdevs_discovered": 1, 00:37:42.659 "num_base_bdevs_operational": 3, 00:37:42.659 "base_bdevs_list": [ 00:37:42.659 { 00:37:42.659 "name": "pt1", 00:37:42.659 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:42.659 "is_configured": true, 00:37:42.659 "data_offset": 2048, 00:37:42.659 "data_size": 63488 00:37:42.659 }, 00:37:42.659 { 00:37:42.659 "name": null, 00:37:42.659 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:42.659 "is_configured": false, 00:37:42.659 "data_offset": 0, 00:37:42.659 "data_size": 63488 00:37:42.659 }, 00:37:42.659 { 00:37:42.659 "name": null, 00:37:42.659 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:42.659 "is_configured": false, 00:37:42.659 
"data_offset": 2048, 00:37:42.659 "data_size": 63488 00:37:42.659 } 00:37:42.659 ] 00:37:42.659 }' 00:37:42.659 05:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:42.659 05:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:43.225 05:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:37:43.225 05:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:37:43.225 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:43.225 05:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.225 05:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:43.225 [2024-12-09 05:28:30.003878] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:43.225 [2024-12-09 05:28:30.003985] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:43.225 [2024-12-09 05:28:30.004016] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:37:43.225 [2024-12-09 05:28:30.004034] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:43.225 [2024-12-09 05:28:30.004625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:43.225 [2024-12-09 05:28:30.004655] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:43.225 [2024-12-09 05:28:30.004757] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:37:43.225 [2024-12-09 05:28:30.004827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:43.225 pt2 00:37:43.225 05:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.225 05:28:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:37:43.225 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:37:43.225 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:37:43.225 05:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.225 05:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:43.225 [2024-12-09 05:28:30.011859] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:37:43.225 [2024-12-09 05:28:30.011916] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:43.225 [2024-12-09 05:28:30.011938] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:37:43.225 [2024-12-09 05:28:30.011954] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:43.225 [2024-12-09 05:28:30.012398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:43.225 [2024-12-09 05:28:30.012438] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:37:43.225 [2024-12-09 05:28:30.012530] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:37:43.225 [2024-12-09 05:28:30.012564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:37:43.225 [2024-12-09 05:28:30.012723] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:37:43.225 [2024-12-09 05:28:30.012754] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:37:43.225 [2024-12-09 05:28:30.013078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:37:43.225 [2024-12-09 05:28:30.013280] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:37:43.225 [2024-12-09 05:28:30.013301] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:37:43.225 [2024-12-09 05:28:30.013488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:43.225 pt3 00:37:43.225 05:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.225 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:37:43.225 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:37:43.225 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:37:43.225 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:43.225 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:43.226 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:43.226 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:43.226 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:43.226 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:43.226 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:43.226 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:43.226 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:43.226 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:43.226 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:43.226 05:28:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.226 05:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:43.226 05:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.226 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:43.226 "name": "raid_bdev1", 00:37:43.226 "uuid": "3bf1f031-964c-40c5-8834-c4d620cf1c38", 00:37:43.226 "strip_size_kb": 0, 00:37:43.226 "state": "online", 00:37:43.226 "raid_level": "raid1", 00:37:43.226 "superblock": true, 00:37:43.226 "num_base_bdevs": 3, 00:37:43.226 "num_base_bdevs_discovered": 3, 00:37:43.226 "num_base_bdevs_operational": 3, 00:37:43.226 "base_bdevs_list": [ 00:37:43.226 { 00:37:43.226 "name": "pt1", 00:37:43.226 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:43.226 "is_configured": true, 00:37:43.226 "data_offset": 2048, 00:37:43.226 "data_size": 63488 00:37:43.226 }, 00:37:43.226 { 00:37:43.226 "name": "pt2", 00:37:43.226 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:43.226 "is_configured": true, 00:37:43.226 "data_offset": 2048, 00:37:43.226 "data_size": 63488 00:37:43.226 }, 00:37:43.226 { 00:37:43.226 "name": "pt3", 00:37:43.226 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:43.226 "is_configured": true, 00:37:43.226 "data_offset": 2048, 00:37:43.226 "data_size": 63488 00:37:43.226 } 00:37:43.226 ] 00:37:43.226 }' 00:37:43.226 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:43.226 05:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:43.791 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:37:43.791 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:37:43.791 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:37:43.791 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:37:43.791 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:37:43.791 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:37:43.791 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:37:43.791 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:43.791 05:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.791 05:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:43.791 [2024-12-09 05:28:30.544362] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:43.791 05:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.791 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:43.791 "name": "raid_bdev1", 00:37:43.791 "aliases": [ 00:37:43.791 "3bf1f031-964c-40c5-8834-c4d620cf1c38" 00:37:43.791 ], 00:37:43.791 "product_name": "Raid Volume", 00:37:43.791 "block_size": 512, 00:37:43.791 "num_blocks": 63488, 00:37:43.791 "uuid": "3bf1f031-964c-40c5-8834-c4d620cf1c38", 00:37:43.791 "assigned_rate_limits": { 00:37:43.791 "rw_ios_per_sec": 0, 00:37:43.791 "rw_mbytes_per_sec": 0, 00:37:43.791 "r_mbytes_per_sec": 0, 00:37:43.791 "w_mbytes_per_sec": 0 00:37:43.791 }, 00:37:43.791 "claimed": false, 00:37:43.791 "zoned": false, 00:37:43.791 "supported_io_types": { 00:37:43.791 "read": true, 00:37:43.791 "write": true, 00:37:43.791 "unmap": false, 00:37:43.791 "flush": false, 00:37:43.791 "reset": true, 00:37:43.791 "nvme_admin": false, 00:37:43.791 "nvme_io": false, 00:37:43.791 "nvme_io_md": false, 00:37:43.792 "write_zeroes": true, 00:37:43.792 "zcopy": false, 00:37:43.792 "get_zone_info": 
false, 00:37:43.792 "zone_management": false, 00:37:43.792 "zone_append": false, 00:37:43.792 "compare": false, 00:37:43.792 "compare_and_write": false, 00:37:43.792 "abort": false, 00:37:43.792 "seek_hole": false, 00:37:43.792 "seek_data": false, 00:37:43.792 "copy": false, 00:37:43.792 "nvme_iov_md": false 00:37:43.792 }, 00:37:43.792 "memory_domains": [ 00:37:43.792 { 00:37:43.792 "dma_device_id": "system", 00:37:43.792 "dma_device_type": 1 00:37:43.792 }, 00:37:43.792 { 00:37:43.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:43.792 "dma_device_type": 2 00:37:43.792 }, 00:37:43.792 { 00:37:43.792 "dma_device_id": "system", 00:37:43.792 "dma_device_type": 1 00:37:43.792 }, 00:37:43.792 { 00:37:43.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:43.792 "dma_device_type": 2 00:37:43.792 }, 00:37:43.792 { 00:37:43.792 "dma_device_id": "system", 00:37:43.792 "dma_device_type": 1 00:37:43.792 }, 00:37:43.792 { 00:37:43.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:43.792 "dma_device_type": 2 00:37:43.792 } 00:37:43.792 ], 00:37:43.792 "driver_specific": { 00:37:43.792 "raid": { 00:37:43.792 "uuid": "3bf1f031-964c-40c5-8834-c4d620cf1c38", 00:37:43.792 "strip_size_kb": 0, 00:37:43.792 "state": "online", 00:37:43.792 "raid_level": "raid1", 00:37:43.792 "superblock": true, 00:37:43.792 "num_base_bdevs": 3, 00:37:43.792 "num_base_bdevs_discovered": 3, 00:37:43.792 "num_base_bdevs_operational": 3, 00:37:43.792 "base_bdevs_list": [ 00:37:43.792 { 00:37:43.792 "name": "pt1", 00:37:43.792 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:43.792 "is_configured": true, 00:37:43.792 "data_offset": 2048, 00:37:43.792 "data_size": 63488 00:37:43.792 }, 00:37:43.792 { 00:37:43.792 "name": "pt2", 00:37:43.792 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:43.792 "is_configured": true, 00:37:43.792 "data_offset": 2048, 00:37:43.792 "data_size": 63488 00:37:43.792 }, 00:37:43.792 { 00:37:43.792 "name": "pt3", 00:37:43.792 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:37:43.792 "is_configured": true, 00:37:43.792 "data_offset": 2048, 00:37:43.792 "data_size": 63488 00:37:43.792 } 00:37:43.792 ] 00:37:43.792 } 00:37:43.792 } 00:37:43.792 }' 00:37:43.792 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:43.792 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:37:43.792 pt2 00:37:43.792 pt3' 00:37:43.792 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:43.792 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:37:43.792 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:43.792 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:37:43.792 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:43.792 05:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.792 05:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:43.792 05:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.792 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:43.792 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:43.792 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:43.792 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:37:43.792 05:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:37:43.792 05:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:43.792 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:37:44.050 [2024-12-09 05:28:30.864415] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3bf1f031-964c-40c5-8834-c4d620cf1c38 '!=' 3bf1f031-964c-40c5-8834-c4d620cf1c38 ']' 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:44.050 [2024-12-09 05:28:30.920205] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:44.050 05:28:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:44.050 "name": "raid_bdev1", 00:37:44.050 "uuid": "3bf1f031-964c-40c5-8834-c4d620cf1c38", 00:37:44.050 "strip_size_kb": 0, 00:37:44.050 "state": "online", 00:37:44.050 "raid_level": "raid1", 00:37:44.050 "superblock": true, 00:37:44.050 "num_base_bdevs": 3, 00:37:44.050 "num_base_bdevs_discovered": 2, 00:37:44.050 "num_base_bdevs_operational": 2, 00:37:44.050 "base_bdevs_list": [ 00:37:44.050 { 00:37:44.050 "name": null, 00:37:44.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:44.050 "is_configured": false, 00:37:44.050 "data_offset": 0, 00:37:44.050 "data_size": 63488 00:37:44.050 }, 00:37:44.050 { 00:37:44.050 "name": "pt2", 00:37:44.050 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:44.050 "is_configured": true, 00:37:44.050 "data_offset": 2048, 00:37:44.050 "data_size": 63488 00:37:44.050 }, 00:37:44.050 { 00:37:44.050 "name": "pt3", 00:37:44.050 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:44.050 "is_configured": true, 00:37:44.050 "data_offset": 2048, 00:37:44.050 "data_size": 63488 00:37:44.050 } 
00:37:44.050 ] 00:37:44.050 }' 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:44.050 05:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:44.617 [2024-12-09 05:28:31.440337] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:44.617 [2024-12-09 05:28:31.440373] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:44.617 [2024-12-09 05:28:31.440528] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:44.617 [2024-12-09 05:28:31.440604] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:44.617 [2024-12-09 05:28:31.440626] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.617 05:28:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:44.617 [2024-12-09 05:28:31.524298] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:44.617 [2024-12-09 05:28:31.524414] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:44.617 [2024-12-09 05:28:31.524453] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:37:44.617 [2024-12-09 05:28:31.524468] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:44.617 [2024-12-09 05:28:31.527300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:44.617 [2024-12-09 05:28:31.527362] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:44.617 [2024-12-09 05:28:31.527466] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:37:44.617 [2024-12-09 05:28:31.527525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:44.617 pt2 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:44.617 05:28:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.617 05:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:44.617 "name": "raid_bdev1", 00:37:44.617 "uuid": "3bf1f031-964c-40c5-8834-c4d620cf1c38", 00:37:44.617 "strip_size_kb": 0, 00:37:44.617 "state": "configuring", 00:37:44.618 "raid_level": "raid1", 00:37:44.618 "superblock": true, 00:37:44.618 "num_base_bdevs": 3, 00:37:44.618 "num_base_bdevs_discovered": 1, 00:37:44.618 "num_base_bdevs_operational": 2, 00:37:44.618 "base_bdevs_list": [ 00:37:44.618 { 00:37:44.618 "name": null, 00:37:44.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:44.618 "is_configured": false, 00:37:44.618 "data_offset": 2048, 00:37:44.618 "data_size": 63488 00:37:44.618 }, 00:37:44.618 { 00:37:44.618 "name": "pt2", 00:37:44.618 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:44.618 "is_configured": true, 00:37:44.618 "data_offset": 2048, 00:37:44.618 "data_size": 63488 00:37:44.618 }, 00:37:44.618 { 00:37:44.618 "name": null, 00:37:44.618 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:44.618 "is_configured": false, 00:37:44.618 "data_offset": 2048, 00:37:44.618 "data_size": 63488 00:37:44.618 } 
00:37:44.618 ] 00:37:44.618 }' 00:37:44.618 05:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:44.618 05:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:45.185 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:37:45.185 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:37:45.185 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:37:45.185 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:37:45.185 05:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.185 05:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:45.185 [2024-12-09 05:28:32.068488] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:37:45.185 [2024-12-09 05:28:32.068567] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:45.185 [2024-12-09 05:28:32.068591] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:37:45.185 [2024-12-09 05:28:32.068606] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:45.185 [2024-12-09 05:28:32.069220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:45.185 [2024-12-09 05:28:32.069265] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:37:45.185 [2024-12-09 05:28:32.069360] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:37:45.185 [2024-12-09 05:28:32.069399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:37:45.185 [2024-12-09 05:28:32.069545] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:37:45.185 [2024-12-09 05:28:32.069564] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:37:45.185 [2024-12-09 05:28:32.069981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:37:45.185 [2024-12-09 05:28:32.070373] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:37:45.185 [2024-12-09 05:28:32.070397] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:37:45.185 [2024-12-09 05:28:32.070610] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:45.185 pt3 00:37:45.185 05:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.185 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:45.185 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:45.185 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:45.185 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:45.185 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:45.185 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:45.185 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:45.185 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:45.185 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:45.185 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:45.185 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:45.185 
05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:45.185 05:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.185 05:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:45.185 05:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.185 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:45.185 "name": "raid_bdev1", 00:37:45.185 "uuid": "3bf1f031-964c-40c5-8834-c4d620cf1c38", 00:37:45.185 "strip_size_kb": 0, 00:37:45.185 "state": "online", 00:37:45.185 "raid_level": "raid1", 00:37:45.185 "superblock": true, 00:37:45.185 "num_base_bdevs": 3, 00:37:45.186 "num_base_bdevs_discovered": 2, 00:37:45.186 "num_base_bdevs_operational": 2, 00:37:45.186 "base_bdevs_list": [ 00:37:45.186 { 00:37:45.186 "name": null, 00:37:45.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:45.186 "is_configured": false, 00:37:45.186 "data_offset": 2048, 00:37:45.186 "data_size": 63488 00:37:45.186 }, 00:37:45.186 { 00:37:45.186 "name": "pt2", 00:37:45.186 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:45.186 "is_configured": true, 00:37:45.186 "data_offset": 2048, 00:37:45.186 "data_size": 63488 00:37:45.186 }, 00:37:45.186 { 00:37:45.186 "name": "pt3", 00:37:45.186 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:45.186 "is_configured": true, 00:37:45.186 "data_offset": 2048, 00:37:45.186 "data_size": 63488 00:37:45.186 } 00:37:45.186 ] 00:37:45.186 }' 00:37:45.186 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:45.186 05:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:45.754 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:45.754 05:28:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.754 05:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:45.754 [2024-12-09 05:28:32.600557] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:45.754 [2024-12-09 05:28:32.600587] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:45.754 [2024-12-09 05:28:32.600648] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:45.754 [2024-12-09 05:28:32.600713] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:45.754 [2024-12-09 05:28:32.600726] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:37:45.754 05:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.754 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:37:45.754 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:45.754 05:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.754 05:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:45.754 05:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.754 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:37:45.754 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:37:45.754 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:37:45.754 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:37:45.754 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:37:45.754 05:28:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.754 05:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:45.754 05:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.755 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:45.755 05:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.755 05:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:45.755 [2024-12-09 05:28:32.672634] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:45.755 [2024-12-09 05:28:32.672705] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:45.755 [2024-12-09 05:28:32.672732] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:37:45.755 [2024-12-09 05:28:32.672744] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:45.755 [2024-12-09 05:28:32.675783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:45.755 [2024-12-09 05:28:32.675848] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:45.755 [2024-12-09 05:28:32.675931] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:37:45.755 [2024-12-09 05:28:32.675986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:45.755 [2024-12-09 05:28:32.676139] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:37:45.755 [2024-12-09 05:28:32.676170] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:45.755 [2024-12-09 05:28:32.676189] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:37:45.755 [2024-12-09 05:28:32.676251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:45.755 pt1 00:37:45.755 05:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.755 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:37:45.755 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:37:45.755 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:45.755 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:45.755 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:45.755 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:45.755 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:45.755 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:45.755 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:45.755 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:45.755 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:45.755 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:45.755 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:45.755 05:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.755 05:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:45.755 05:28:32 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.014 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:46.014 "name": "raid_bdev1", 00:37:46.014 "uuid": "3bf1f031-964c-40c5-8834-c4d620cf1c38", 00:37:46.014 "strip_size_kb": 0, 00:37:46.014 "state": "configuring", 00:37:46.014 "raid_level": "raid1", 00:37:46.014 "superblock": true, 00:37:46.014 "num_base_bdevs": 3, 00:37:46.014 "num_base_bdevs_discovered": 1, 00:37:46.014 "num_base_bdevs_operational": 2, 00:37:46.014 "base_bdevs_list": [ 00:37:46.014 { 00:37:46.014 "name": null, 00:37:46.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:46.014 "is_configured": false, 00:37:46.014 "data_offset": 2048, 00:37:46.014 "data_size": 63488 00:37:46.014 }, 00:37:46.014 { 00:37:46.014 "name": "pt2", 00:37:46.014 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:46.014 "is_configured": true, 00:37:46.014 "data_offset": 2048, 00:37:46.014 "data_size": 63488 00:37:46.014 }, 00:37:46.014 { 00:37:46.014 "name": null, 00:37:46.014 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:46.014 "is_configured": false, 00:37:46.014 "data_offset": 2048, 00:37:46.014 "data_size": 63488 00:37:46.014 } 00:37:46.014 ] 00:37:46.014 }' 00:37:46.014 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:46.014 05:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:46.272 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:37:46.272 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.272 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:37:46.272 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:46.272 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:37:46.531 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:37:46.531 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:37:46.531 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.531 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:46.531 [2024-12-09 05:28:33.264765] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:37:46.531 [2024-12-09 05:28:33.264852] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:46.531 [2024-12-09 05:28:33.264883] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:37:46.531 [2024-12-09 05:28:33.264896] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:46.531 [2024-12-09 05:28:33.265430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:46.531 [2024-12-09 05:28:33.265485] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:37:46.531 [2024-12-09 05:28:33.265571] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:37:46.531 [2024-12-09 05:28:33.265600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:37:46.531 [2024-12-09 05:28:33.265750] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:37:46.531 [2024-12-09 05:28:33.265780] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:37:46.531 [2024-12-09 05:28:33.266162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:37:46.531 [2024-12-09 05:28:33.266366] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:37:46.531 [2024-12-09 05:28:33.266390] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:37:46.531 [2024-12-09 05:28:33.266579] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:46.531 pt3 00:37:46.531 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.531 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:46.531 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:46.531 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:46.531 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:46.531 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:46.531 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:46.531 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:46.531 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:46.531 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:46.531 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:46.531 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:46.531 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.531 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:46.531 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:46.531 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:37:46.531 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:46.531 "name": "raid_bdev1", 00:37:46.531 "uuid": "3bf1f031-964c-40c5-8834-c4d620cf1c38", 00:37:46.531 "strip_size_kb": 0, 00:37:46.531 "state": "online", 00:37:46.531 "raid_level": "raid1", 00:37:46.531 "superblock": true, 00:37:46.531 "num_base_bdevs": 3, 00:37:46.531 "num_base_bdevs_discovered": 2, 00:37:46.531 "num_base_bdevs_operational": 2, 00:37:46.531 "base_bdevs_list": [ 00:37:46.531 { 00:37:46.531 "name": null, 00:37:46.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:46.531 "is_configured": false, 00:37:46.531 "data_offset": 2048, 00:37:46.531 "data_size": 63488 00:37:46.531 }, 00:37:46.531 { 00:37:46.531 "name": "pt2", 00:37:46.531 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:46.531 "is_configured": true, 00:37:46.531 "data_offset": 2048, 00:37:46.531 "data_size": 63488 00:37:46.531 }, 00:37:46.531 { 00:37:46.531 "name": "pt3", 00:37:46.531 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:46.531 "is_configured": true, 00:37:46.531 "data_offset": 2048, 00:37:46.531 "data_size": 63488 00:37:46.531 } 00:37:46.531 ] 00:37:46.531 }' 00:37:46.531 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:46.531 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:47.098 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:37:47.098 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:37:47.098 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.098 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:47.098 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.098 05:28:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:37:47.098 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:47.098 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.098 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:47.098 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:37:47.098 [2024-12-09 05:28:33.857279] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:47.098 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.098 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 3bf1f031-964c-40c5-8834-c4d620cf1c38 '!=' 3bf1f031-964c-40c5-8834-c4d620cf1c38 ']' 00:37:47.098 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68772 00:37:47.098 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68772 ']' 00:37:47.098 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68772 00:37:47.098 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:37:47.098 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:47.098 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68772 00:37:47.098 killing process with pid 68772 00:37:47.098 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:47.098 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:47.098 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68772' 00:37:47.098 05:28:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68772 00:37:47.098 [2024-12-09 05:28:33.942721] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:47.098 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68772 00:37:47.098 [2024-12-09 05:28:33.942875] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:47.098 [2024-12-09 05:28:33.942952] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:47.098 [2024-12-09 05:28:33.942971] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:37:47.356 [2024-12-09 05:28:34.192374] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:48.740 ************************************ 00:37:48.740 END TEST raid_superblock_test 00:37:48.740 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:37:48.740 00:37:48.740 real 0m8.730s 00:37:48.740 user 0m14.223s 00:37:48.740 sys 0m1.283s 00:37:48.740 05:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:48.740 05:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:48.740 ************************************ 00:37:48.740 05:28:35 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:37:48.740 05:28:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:37:48.740 05:28:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:48.740 05:28:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:48.740 ************************************ 00:37:48.740 START TEST raid_read_error_test 00:37:48.740 ************************************ 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:37:48.740 05:28:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:37:48.740 05:28:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.IHgpSSyMZj 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69229 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69229 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69229 ']' 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:48.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:48.740 05:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:48.740 [2024-12-09 05:28:35.464509] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:37:48.740 [2024-12-09 05:28:35.464671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69229 ] 00:37:48.740 [2024-12-09 05:28:35.635938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:48.998 [2024-12-09 05:28:35.773107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:49.256 [2024-12-09 05:28:35.973729] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:49.256 [2024-12-09 05:28:35.973799] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:49.515 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:49.515 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:37:49.515 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:37:49.515 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:37:49.515 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.515 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:49.774 BaseBdev1_malloc 00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:49.774 true 00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:49.774 [2024-12-09 05:28:36.532615] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:37:49.774 [2024-12-09 05:28:36.532713] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:49.774 [2024-12-09 05:28:36.532743] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:37:49.774 [2024-12-09 05:28:36.532761] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:49.774 [2024-12-09 05:28:36.535652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:49.774 [2024-12-09 05:28:36.535715] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:49.774 BaseBdev1 00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:49.774 BaseBdev2_malloc 00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:49.774 true 00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:49.774 [2024-12-09 05:28:36.591748] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:37:49.774 [2024-12-09 05:28:36.591848] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:49.774 [2024-12-09 05:28:36.591876] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:37:49.774 [2024-12-09 05:28:36.591893] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:49.774 [2024-12-09 05:28:36.594810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:49.774 [2024-12-09 05:28:36.594886] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:37:49.774 BaseBdev2 00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:49.774 BaseBdev3_malloc 00:37:49.774 05:28:36 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:49.774 true 00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.774 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:49.774 [2024-12-09 05:28:36.666842] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:37:49.774 [2024-12-09 05:28:36.666965] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:49.774 [2024-12-09 05:28:36.666995] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:37:49.774 [2024-12-09 05:28:36.667014] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:49.774 [2024-12-09 05:28:36.669974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:49.774 [2024-12-09 05:28:36.670051] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:37:49.774 BaseBdev3 00:37:49.775 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:49.775 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:37:49.775 05:28:36 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.775 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:49.775 [2024-12-09 05:28:36.674971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:49.775 [2024-12-09 05:28:36.677469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:49.775 [2024-12-09 05:28:36.677575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:49.775 [2024-12-09 05:28:36.677891] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:37:49.775 [2024-12-09 05:28:36.677911] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:37:49.775 [2024-12-09 05:28:36.678246] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:37:49.775 [2024-12-09 05:28:36.678506] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:37:49.775 [2024-12-09 05:28:36.678524] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:37:49.775 [2024-12-09 05:28:36.678707] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:49.775 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:49.775 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:37:49.775 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:49.775 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:49.775 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:49.775 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:49.775 05:28:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:49.775 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:49.775 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:49.775 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:49.775 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:49.775 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:49.775 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:49.775 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.775 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:49.775 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:49.775 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:49.775 "name": "raid_bdev1", 00:37:49.775 "uuid": "04444f73-e064-470b-8832-b6c48cb468db", 00:37:49.775 "strip_size_kb": 0, 00:37:49.775 "state": "online", 00:37:49.775 "raid_level": "raid1", 00:37:49.775 "superblock": true, 00:37:49.775 "num_base_bdevs": 3, 00:37:49.775 "num_base_bdevs_discovered": 3, 00:37:49.775 "num_base_bdevs_operational": 3, 00:37:49.775 "base_bdevs_list": [ 00:37:49.775 { 00:37:49.775 "name": "BaseBdev1", 00:37:49.775 "uuid": "d184b87d-7970-5253-a3b7-41c519e8aa43", 00:37:49.775 "is_configured": true, 00:37:49.775 "data_offset": 2048, 00:37:49.775 "data_size": 63488 00:37:49.775 }, 00:37:49.775 { 00:37:49.775 "name": "BaseBdev2", 00:37:49.775 "uuid": "7e9e8e43-1d65-5d07-9cb9-d0ceb6d7091f", 00:37:49.775 "is_configured": true, 00:37:49.775 "data_offset": 2048, 00:37:49.775 "data_size": 63488 
00:37:49.775 }, 00:37:49.775 { 00:37:49.775 "name": "BaseBdev3", 00:37:49.775 "uuid": "282ee79a-ce0b-5400-8574-a565d305746b", 00:37:49.775 "is_configured": true, 00:37:49.775 "data_offset": 2048, 00:37:49.775 "data_size": 63488 00:37:49.775 } 00:37:49.775 ] 00:37:49.775 }' 00:37:49.775 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:49.775 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:50.340 05:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:37:50.340 05:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:37:50.340 [2024-12-09 05:28:37.308972] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:37:51.275 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:37:51.275 05:28:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.275 05:28:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:51.275 05:28:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.275 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:37:51.275 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:37:51.275 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:37:51.275 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:37:51.275 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:37:51.275 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:51.275 
05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:51.275 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:51.275 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:51.275 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:51.275 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:51.275 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:51.275 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:51.275 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:51.275 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:51.275 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:51.275 05:28:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.275 05:28:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:51.275 05:28:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.533 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:51.533 "name": "raid_bdev1", 00:37:51.533 "uuid": "04444f73-e064-470b-8832-b6c48cb468db", 00:37:51.533 "strip_size_kb": 0, 00:37:51.533 "state": "online", 00:37:51.533 "raid_level": "raid1", 00:37:51.533 "superblock": true, 00:37:51.533 "num_base_bdevs": 3, 00:37:51.533 "num_base_bdevs_discovered": 3, 00:37:51.533 "num_base_bdevs_operational": 3, 00:37:51.533 "base_bdevs_list": [ 00:37:51.533 { 00:37:51.533 "name": "BaseBdev1", 00:37:51.533 "uuid": "d184b87d-7970-5253-a3b7-41c519e8aa43", 
00:37:51.533 "is_configured": true, 00:37:51.533 "data_offset": 2048, 00:37:51.533 "data_size": 63488 00:37:51.533 }, 00:37:51.533 { 00:37:51.533 "name": "BaseBdev2", 00:37:51.533 "uuid": "7e9e8e43-1d65-5d07-9cb9-d0ceb6d7091f", 00:37:51.533 "is_configured": true, 00:37:51.533 "data_offset": 2048, 00:37:51.533 "data_size": 63488 00:37:51.533 }, 00:37:51.533 { 00:37:51.533 "name": "BaseBdev3", 00:37:51.533 "uuid": "282ee79a-ce0b-5400-8574-a565d305746b", 00:37:51.533 "is_configured": true, 00:37:51.533 "data_offset": 2048, 00:37:51.533 "data_size": 63488 00:37:51.533 } 00:37:51.533 ] 00:37:51.533 }' 00:37:51.533 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:51.533 05:28:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:52.098 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:52.098 05:28:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:52.098 05:28:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:52.098 [2024-12-09 05:28:38.778651] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:52.098 [2024-12-09 05:28:38.778697] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:52.098 [2024-12-09 05:28:38.782297] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:52.098 [2024-12-09 05:28:38.782520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:52.098 [2024-12-09 05:28:38.782844] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:52.098 [2024-12-09 05:28:38.783036] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:37:52.098 { 00:37:52.098 "results": [ 00:37:52.099 { 00:37:52.099 "job": "raid_bdev1", 
00:37:52.099 "core_mask": "0x1", 00:37:52.099 "workload": "randrw", 00:37:52.099 "percentage": 50, 00:37:52.099 "status": "finished", 00:37:52.099 "queue_depth": 1, 00:37:52.099 "io_size": 131072, 00:37:52.099 "runtime": 1.466919, 00:37:52.099 "iops": 9192.054912370759, 00:37:52.099 "mibps": 1149.0068640463448, 00:37:52.099 "io_failed": 0, 00:37:52.099 "io_timeout": 0, 00:37:52.099 "avg_latency_us": 104.78330573609126, 00:37:52.099 "min_latency_us": 38.4, 00:37:52.099 "max_latency_us": 1846.9236363636364 00:37:52.099 } 00:37:52.099 ], 00:37:52.099 "core_count": 1 00:37:52.099 } 00:37:52.099 05:28:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:52.099 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69229 00:37:52.099 05:28:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69229 ']' 00:37:52.099 05:28:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69229 00:37:52.099 05:28:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:37:52.099 05:28:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:52.099 05:28:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69229 00:37:52.099 killing process with pid 69229 00:37:52.099 05:28:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:52.099 05:28:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:52.099 05:28:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69229' 00:37:52.099 05:28:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69229 00:37:52.099 [2024-12-09 05:28:38.829240] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:52.099 05:28:38 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69229 00:37:52.099 [2024-12-09 05:28:39.020199] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:53.475 05:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.IHgpSSyMZj 00:37:53.475 05:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:37:53.475 05:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:37:53.475 05:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:37:53.475 05:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:37:53.475 05:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:37:53.475 05:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:37:53.475 05:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:37:53.475 00:37:53.475 real 0m4.841s 00:37:53.475 user 0m5.940s 00:37:53.475 sys 0m0.657s 00:37:53.475 ************************************ 00:37:53.475 END TEST raid_read_error_test 00:37:53.475 ************************************ 00:37:53.475 05:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:53.475 05:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:53.475 05:28:40 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:37:53.475 05:28:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:37:53.475 05:28:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:53.475 05:28:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:53.475 ************************************ 00:37:53.475 START TEST raid_write_error_test 00:37:53.475 ************************************ 00:37:53.475 05:28:40 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MRs5QwWOMT 00:37:53.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69369 00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69369 00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69369 ']' 00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:53.475 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:53.475 [2024-12-09 05:28:40.396214] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:37:53.475 [2024-12-09 05:28:40.396397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69369 ] 00:37:53.734 [2024-12-09 05:28:40.584243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:53.993 [2024-12-09 05:28:40.725141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:53.993 [2024-12-09 05:28:40.934790] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:53.993 [2024-12-09 05:28:40.934878] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:54.561 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:54.561 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:37:54.561 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:37:54.561 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:37:54.561 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.561 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:54.561 BaseBdev1_malloc 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:54.562 true 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:54.562 [2024-12-09 05:28:41.403930] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:37:54.562 [2024-12-09 05:28:41.404021] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:54.562 [2024-12-09 05:28:41.404050] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:37:54.562 [2024-12-09 05:28:41.404067] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:54.562 [2024-12-09 05:28:41.406724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:54.562 [2024-12-09 05:28:41.406793] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:54.562 BaseBdev1 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:37:54.562 BaseBdev2_malloc 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:54.562 true 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:54.562 [2024-12-09 05:28:41.465096] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:37:54.562 [2024-12-09 05:28:41.465196] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:54.562 [2024-12-09 05:28:41.465223] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:37:54.562 [2024-12-09 05:28:41.465240] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:54.562 [2024-12-09 05:28:41.468118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:54.562 [2024-12-09 05:28:41.468182] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:37:54.562 BaseBdev2 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:37:54.562 05:28:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:54.562 BaseBdev3_malloc 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:54.562 true 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.562 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:54.562 [2024-12-09 05:28:41.530518] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:37:54.562 [2024-12-09 05:28:41.530632] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:54.562 [2024-12-09 05:28:41.530676] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:37:54.562 [2024-12-09 05:28:41.530694] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:54.821 [2024-12-09 05:28:41.533631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:54.821 [2024-12-09 05:28:41.533695] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:37:54.821 BaseBdev3 00:37:54.821 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.821 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:37:54.821 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.821 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:54.821 [2024-12-09 05:28:41.538663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:54.821 [2024-12-09 05:28:41.541134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:54.821 [2024-12-09 05:28:41.541352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:54.821 [2024-12-09 05:28:41.541668] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:37:54.821 [2024-12-09 05:28:41.541807] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:37:54.821 [2024-12-09 05:28:41.542173] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:37:54.821 [2024-12-09 05:28:41.542592] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:37:54.821 [2024-12-09 05:28:41.542725] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:37:54.821 [2024-12-09 05:28:41.543075] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:54.821 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.821 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:37:54.821 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:37:54.821 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:54.821 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:54.821 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:54.821 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:54.821 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:54.821 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:54.821 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:54.821 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:54.821 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:54.821 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:54.821 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.821 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:54.821 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.821 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:54.821 "name": "raid_bdev1", 00:37:54.821 "uuid": "456720de-025c-426b-a04d-8bf4f668cfb2", 00:37:54.822 "strip_size_kb": 0, 00:37:54.822 "state": "online", 00:37:54.822 "raid_level": "raid1", 00:37:54.822 "superblock": true, 00:37:54.822 "num_base_bdevs": 3, 00:37:54.822 "num_base_bdevs_discovered": 3, 00:37:54.822 "num_base_bdevs_operational": 3, 00:37:54.822 "base_bdevs_list": [ 00:37:54.822 { 00:37:54.822 "name": "BaseBdev1", 00:37:54.822 
"uuid": "23383808-87dd-5f4b-8739-abc51c7faeb1", 00:37:54.822 "is_configured": true, 00:37:54.822 "data_offset": 2048, 00:37:54.822 "data_size": 63488 00:37:54.822 }, 00:37:54.822 { 00:37:54.822 "name": "BaseBdev2", 00:37:54.822 "uuid": "777cacf4-be91-590b-84ad-7f77d1d9bdb2", 00:37:54.822 "is_configured": true, 00:37:54.822 "data_offset": 2048, 00:37:54.822 "data_size": 63488 00:37:54.822 }, 00:37:54.822 { 00:37:54.822 "name": "BaseBdev3", 00:37:54.822 "uuid": "6c59e08a-3183-5bbb-bb67-74bd615d085a", 00:37:54.822 "is_configured": true, 00:37:54.822 "data_offset": 2048, 00:37:54.822 "data_size": 63488 00:37:54.822 } 00:37:54.822 ] 00:37:54.822 }' 00:37:54.822 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:54.822 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:55.389 05:28:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:37:55.390 05:28:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:37:55.390 [2024-12-09 05:28:42.204556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:37:56.326 05:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:37:56.326 05:28:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.326 05:28:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:56.326 [2024-12-09 05:28:43.075292] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:37:56.326 [2024-12-09 05:28:43.075392] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:56.326 [2024-12-09 05:28:43.075665] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:37:56.326 05:28:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.326 05:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:37:56.326 05:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:37:56.326 05:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:37:56.326 05:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:37:56.326 05:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:56.326 05:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:56.326 05:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:56.326 05:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:56.326 05:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:56.326 05:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:56.326 05:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:56.326 05:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:56.326 05:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:56.326 05:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:56.326 05:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:56.326 05:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:56.326 05:28:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:37:56.327 05:28:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:56.327 05:28:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.327 05:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:56.327 "name": "raid_bdev1", 00:37:56.327 "uuid": "456720de-025c-426b-a04d-8bf4f668cfb2", 00:37:56.327 "strip_size_kb": 0, 00:37:56.327 "state": "online", 00:37:56.327 "raid_level": "raid1", 00:37:56.327 "superblock": true, 00:37:56.327 "num_base_bdevs": 3, 00:37:56.327 "num_base_bdevs_discovered": 2, 00:37:56.327 "num_base_bdevs_operational": 2, 00:37:56.327 "base_bdevs_list": [ 00:37:56.327 { 00:37:56.327 "name": null, 00:37:56.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:56.327 "is_configured": false, 00:37:56.327 "data_offset": 0, 00:37:56.327 "data_size": 63488 00:37:56.327 }, 00:37:56.327 { 00:37:56.327 "name": "BaseBdev2", 00:37:56.327 "uuid": "777cacf4-be91-590b-84ad-7f77d1d9bdb2", 00:37:56.327 "is_configured": true, 00:37:56.327 "data_offset": 2048, 00:37:56.327 "data_size": 63488 00:37:56.327 }, 00:37:56.327 { 00:37:56.327 "name": "BaseBdev3", 00:37:56.327 "uuid": "6c59e08a-3183-5bbb-bb67-74bd615d085a", 00:37:56.327 "is_configured": true, 00:37:56.327 "data_offset": 2048, 00:37:56.327 "data_size": 63488 00:37:56.327 } 00:37:56.327 ] 00:37:56.327 }' 00:37:56.327 05:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:56.327 05:28:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:56.895 05:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:56.895 05:28:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.895 05:28:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:56.895 [2024-12-09 05:28:43.630213] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:56.895 [2024-12-09 05:28:43.630574] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:56.895 [2024-12-09 05:28:43.633826] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:56.895 [2024-12-09 05:28:43.634080] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:56.895 [2024-12-09 05:28:43.634421] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:56.895 [2024-12-09 05:28:43.634631] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:37:56.895 { 00:37:56.895 "results": [ 00:37:56.895 { 00:37:56.895 "job": "raid_bdev1", 00:37:56.895 "core_mask": "0x1", 00:37:56.895 "workload": "randrw", 00:37:56.895 "percentage": 50, 00:37:56.895 "status": "finished", 00:37:56.895 "queue_depth": 1, 00:37:56.895 "io_size": 131072, 00:37:56.895 "runtime": 1.423837, 00:37:56.895 "iops": 10019.405311141654, 00:37:56.895 "mibps": 1252.4256638927068, 00:37:56.895 "io_failed": 0, 00:37:56.895 "io_timeout": 0, 00:37:56.895 "avg_latency_us": 95.706391037814, 00:37:56.895 "min_latency_us": 37.00363636363636, 00:37:56.895 "max_latency_us": 1869.2654545454545 00:37:56.895 } 00:37:56.895 ], 00:37:56.895 "core_count": 1 00:37:56.895 } 00:37:56.895 05:28:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.895 05:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69369 00:37:56.895 05:28:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69369 ']' 00:37:56.895 05:28:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69369 00:37:56.895 05:28:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:37:56.895 05:28:43 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:56.895 05:28:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69369 00:37:56.895 05:28:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:56.895 05:28:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:56.895 05:28:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69369' 00:37:56.895 killing process with pid 69369 00:37:56.895 05:28:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69369 00:37:56.895 [2024-12-09 05:28:43.677806] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:56.895 05:28:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69369 00:37:56.895 [2024-12-09 05:28:43.857619] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:58.281 05:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MRs5QwWOMT 00:37:58.281 05:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:37:58.281 05:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:37:58.281 05:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:37:58.281 05:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:37:58.281 ************************************ 00:37:58.281 END TEST raid_write_error_test 00:37:58.281 ************************************ 00:37:58.281 05:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:37:58.281 05:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:37:58.281 05:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:37:58.281 00:37:58.281 real 0m4.782s 00:37:58.281 user 0m5.873s 00:37:58.281 sys 0m0.654s 00:37:58.281 05:28:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:58.281 05:28:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:58.281 05:28:45 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:37:58.281 05:28:45 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:37:58.281 05:28:45 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:37:58.281 05:28:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:37:58.281 05:28:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:58.281 05:28:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:58.281 ************************************ 00:37:58.281 START TEST raid_state_function_test 00:37:58.281 ************************************ 00:37:58.281 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:37:58.281 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:37:58.281 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:37:58.281 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:37:58.281 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:37:58.281 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:37:58.281 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:58.281 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:37:58.281 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:37:58.281 
05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:58.281 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:37:58.281 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:37:58.281 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:58.281 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:37:58.281 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:37:58.281 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:58.281 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:37:58.281 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:37:58.281 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:58.281 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:37:58.281 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:37:58.281 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:37:58.281 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:37:58.281 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:37:58.281 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:37:58.281 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:37:58.281 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:37:58.281 05:28:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:37:58.281 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:37:58.282 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:37:58.282 Process raid pid: 69517 00:37:58.282 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69517 00:37:58.282 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69517' 00:37:58.282 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69517 00:37:58.282 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:37:58.282 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69517 ']' 00:37:58.282 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:58.282 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:58.282 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:58.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:58.282 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:58.282 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:58.282 [2024-12-09 05:28:45.210431] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:37:58.282 [2024-12-09 05:28:45.210996] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:58.545 [2024-12-09 05:28:45.388659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:58.811 [2024-12-09 05:28:45.551724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:59.070 [2024-12-09 05:28:45.782688] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:59.070 [2024-12-09 05:28:45.782744] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:59.329 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:59.329 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:37:59.329 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:37:59.329 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.329 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:59.329 [2024-12-09 05:28:46.241738] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:59.329 [2024-12-09 05:28:46.241855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:59.329 [2024-12-09 05:28:46.241874] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:59.329 [2024-12-09 05:28:46.241892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:59.329 [2024-12-09 05:28:46.241902] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:37:59.329 [2024-12-09 05:28:46.241917] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:37:59.329 [2024-12-09 05:28:46.241926] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:37:59.329 [2024-12-09 05:28:46.241939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:37:59.329 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.329 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:37:59.329 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:59.329 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:59.329 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:37:59.329 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:59.329 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:59.329 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:59.329 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:59.329 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:59.329 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:59.329 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:59.329 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.329 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:37:59.329 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:59.329 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.588 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:59.588 "name": "Existed_Raid", 00:37:59.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:59.588 "strip_size_kb": 64, 00:37:59.588 "state": "configuring", 00:37:59.588 "raid_level": "raid0", 00:37:59.588 "superblock": false, 00:37:59.588 "num_base_bdevs": 4, 00:37:59.588 "num_base_bdevs_discovered": 0, 00:37:59.588 "num_base_bdevs_operational": 4, 00:37:59.588 "base_bdevs_list": [ 00:37:59.588 { 00:37:59.588 "name": "BaseBdev1", 00:37:59.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:59.588 "is_configured": false, 00:37:59.588 "data_offset": 0, 00:37:59.588 "data_size": 0 00:37:59.588 }, 00:37:59.588 { 00:37:59.588 "name": "BaseBdev2", 00:37:59.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:59.588 "is_configured": false, 00:37:59.588 "data_offset": 0, 00:37:59.588 "data_size": 0 00:37:59.588 }, 00:37:59.588 { 00:37:59.588 "name": "BaseBdev3", 00:37:59.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:59.588 "is_configured": false, 00:37:59.588 "data_offset": 0, 00:37:59.588 "data_size": 0 00:37:59.588 }, 00:37:59.588 { 00:37:59.588 "name": "BaseBdev4", 00:37:59.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:59.588 "is_configured": false, 00:37:59.588 "data_offset": 0, 00:37:59.588 "data_size": 0 00:37:59.588 } 00:37:59.588 ] 00:37:59.588 }' 00:37:59.588 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:59.588 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:59.847 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:37:59.847 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.847 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:59.847 [2024-12-09 05:28:46.761788] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:59.847 [2024-12-09 05:28:46.761881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:37:59.847 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.847 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:37:59.847 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.847 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:59.847 [2024-12-09 05:28:46.769824] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:59.847 [2024-12-09 05:28:46.769894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:59.847 [2024-12-09 05:28:46.769909] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:59.847 [2024-12-09 05:28:46.769924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:59.847 [2024-12-09 05:28:46.769933] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:37:59.847 [2024-12-09 05:28:46.769947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:37:59.847 [2024-12-09 05:28:46.769956] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:37:59.847 [2024-12-09 05:28:46.769968] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:37:59.847 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.848 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:37:59.848 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.848 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:59.848 [2024-12-09 05:28:46.817684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:00.106 BaseBdev1 00:38:00.106 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.106 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:38:00.106 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:38:00.106 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:00.106 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:38:00.106 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:00.106 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:00.106 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:38:00.106 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.106 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:00.106 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.106 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:38:00.106 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.106 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:00.106 [ 00:38:00.106 { 00:38:00.106 "name": "BaseBdev1", 00:38:00.106 "aliases": [ 00:38:00.106 "a54d5aee-2731-4ca2-9982-5e59c5a0f985" 00:38:00.106 ], 00:38:00.106 "product_name": "Malloc disk", 00:38:00.106 "block_size": 512, 00:38:00.106 "num_blocks": 65536, 00:38:00.106 "uuid": "a54d5aee-2731-4ca2-9982-5e59c5a0f985", 00:38:00.106 "assigned_rate_limits": { 00:38:00.106 "rw_ios_per_sec": 0, 00:38:00.106 "rw_mbytes_per_sec": 0, 00:38:00.106 "r_mbytes_per_sec": 0, 00:38:00.106 "w_mbytes_per_sec": 0 00:38:00.106 }, 00:38:00.106 "claimed": true, 00:38:00.106 "claim_type": "exclusive_write", 00:38:00.106 "zoned": false, 00:38:00.106 "supported_io_types": { 00:38:00.106 "read": true, 00:38:00.106 "write": true, 00:38:00.106 "unmap": true, 00:38:00.106 "flush": true, 00:38:00.106 "reset": true, 00:38:00.106 "nvme_admin": false, 00:38:00.106 "nvme_io": false, 00:38:00.106 "nvme_io_md": false, 00:38:00.106 "write_zeroes": true, 00:38:00.106 "zcopy": true, 00:38:00.106 "get_zone_info": false, 00:38:00.106 "zone_management": false, 00:38:00.106 "zone_append": false, 00:38:00.106 "compare": false, 00:38:00.106 "compare_and_write": false, 00:38:00.106 "abort": true, 00:38:00.106 "seek_hole": false, 00:38:00.106 "seek_data": false, 00:38:00.106 "copy": true, 00:38:00.106 "nvme_iov_md": false 00:38:00.106 }, 00:38:00.106 "memory_domains": [ 00:38:00.106 { 00:38:00.106 "dma_device_id": "system", 00:38:00.106 "dma_device_type": 1 00:38:00.106 }, 00:38:00.106 { 00:38:00.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:00.106 "dma_device_type": 2 00:38:00.106 } 00:38:00.106 ], 00:38:00.106 "driver_specific": {} 00:38:00.106 } 00:38:00.106 ] 00:38:00.107 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:38:00.107 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:38:00.107 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:38:00.107 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:00.107 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:00.107 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:00.107 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:00.107 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:00.107 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:00.107 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:00.107 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:00.107 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:00.107 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:00.107 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:00.107 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.107 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:00.107 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.107 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:00.107 "name": "Existed_Raid", 
00:38:00.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:00.107 "strip_size_kb": 64, 00:38:00.107 "state": "configuring", 00:38:00.107 "raid_level": "raid0", 00:38:00.107 "superblock": false, 00:38:00.107 "num_base_bdevs": 4, 00:38:00.107 "num_base_bdevs_discovered": 1, 00:38:00.107 "num_base_bdevs_operational": 4, 00:38:00.107 "base_bdevs_list": [ 00:38:00.107 { 00:38:00.107 "name": "BaseBdev1", 00:38:00.107 "uuid": "a54d5aee-2731-4ca2-9982-5e59c5a0f985", 00:38:00.107 "is_configured": true, 00:38:00.107 "data_offset": 0, 00:38:00.107 "data_size": 65536 00:38:00.107 }, 00:38:00.107 { 00:38:00.107 "name": "BaseBdev2", 00:38:00.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:00.107 "is_configured": false, 00:38:00.107 "data_offset": 0, 00:38:00.107 "data_size": 0 00:38:00.107 }, 00:38:00.107 { 00:38:00.107 "name": "BaseBdev3", 00:38:00.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:00.107 "is_configured": false, 00:38:00.107 "data_offset": 0, 00:38:00.107 "data_size": 0 00:38:00.107 }, 00:38:00.107 { 00:38:00.107 "name": "BaseBdev4", 00:38:00.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:00.107 "is_configured": false, 00:38:00.107 "data_offset": 0, 00:38:00.107 "data_size": 0 00:38:00.107 } 00:38:00.107 ] 00:38:00.107 }' 00:38:00.107 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:00.107 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:00.675 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:38:00.675 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.675 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:00.675 [2024-12-09 05:28:47.429873] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:38:00.675 [2024-12-09 05:28:47.429920] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:38:00.675 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.675 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:38:00.675 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.675 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:00.675 [2024-12-09 05:28:47.437939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:00.675 [2024-12-09 05:28:47.440352] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:00.675 [2024-12-09 05:28:47.440421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:00.675 [2024-12-09 05:28:47.440438] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:38:00.675 [2024-12-09 05:28:47.440455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:38:00.675 [2024-12-09 05:28:47.440465] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:38:00.675 [2024-12-09 05:28:47.440507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:38:00.675 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.675 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:38:00.675 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:38:00.675 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
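Each base bdev in the loop above is backed by `rpc_cmd bdev_malloc_create 32 512 -b BaseBdevN`, i.e. a 32 MiB malloc disk with 512-byte blocks. The `bdev_get_bdevs` dump earlier in the trace reports `"block_size": 512` and `"num_blocks": 65536`, which is arithmetically consistent — a quick check (plain arithmetic, not SPDK code):

```python
# Arguments passed to bdev_malloc_create in the trace: size in MiB, block size.
size_mb, block_size = 32, 512

# Total blocks the malloc bdev should expose.
num_blocks = size_mb * 1024 * 1024 // block_size

assert num_blocks == 65536                          # matches "num_blocks": 65536
assert num_blocks * block_size == 32 * 1024 * 1024  # 32 MiB total capacity
```

This also explains the `"data_size": 65536` reported for each configured member in `base_bdevs_list`: with no superblock (`"superblock": false`) and `"data_offset": 0`, the whole malloc bdev is data.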
00:38:00.675 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:00.675 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:00.675 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:00.675 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:00.675 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:00.675 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:00.675 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:00.675 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:00.675 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:00.675 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:00.675 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.675 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:00.675 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:00.675 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.675 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:00.675 "name": "Existed_Raid", 00:38:00.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:00.675 "strip_size_kb": 64, 00:38:00.675 "state": "configuring", 00:38:00.675 "raid_level": "raid0", 00:38:00.675 "superblock": false, 00:38:00.675 "num_base_bdevs": 4, 00:38:00.675 
"num_base_bdevs_discovered": 1, 00:38:00.675 "num_base_bdevs_operational": 4, 00:38:00.675 "base_bdevs_list": [ 00:38:00.675 { 00:38:00.675 "name": "BaseBdev1", 00:38:00.675 "uuid": "a54d5aee-2731-4ca2-9982-5e59c5a0f985", 00:38:00.675 "is_configured": true, 00:38:00.675 "data_offset": 0, 00:38:00.675 "data_size": 65536 00:38:00.675 }, 00:38:00.675 { 00:38:00.675 "name": "BaseBdev2", 00:38:00.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:00.675 "is_configured": false, 00:38:00.675 "data_offset": 0, 00:38:00.675 "data_size": 0 00:38:00.675 }, 00:38:00.675 { 00:38:00.675 "name": "BaseBdev3", 00:38:00.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:00.675 "is_configured": false, 00:38:00.675 "data_offset": 0, 00:38:00.675 "data_size": 0 00:38:00.675 }, 00:38:00.675 { 00:38:00.675 "name": "BaseBdev4", 00:38:00.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:00.675 "is_configured": false, 00:38:00.675 "data_offset": 0, 00:38:00.675 "data_size": 0 00:38:00.675 } 00:38:00.675 ] 00:38:00.675 }' 00:38:00.675 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:00.675 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:01.242 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:38:01.242 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.242 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:01.242 [2024-12-09 05:28:48.042584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:01.242 BaseBdev2 00:38:01.242 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.242 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:38:01.242 05:28:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:38:01.242 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:01.242 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:38:01.242 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:01.242 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:01.242 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:38:01.242 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.242 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:01.242 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.242 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:38:01.242 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.242 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:01.242 [ 00:38:01.242 { 00:38:01.242 "name": "BaseBdev2", 00:38:01.242 "aliases": [ 00:38:01.242 "698a5222-ac8c-4f72-a8b7-3fdbaccba50b" 00:38:01.242 ], 00:38:01.242 "product_name": "Malloc disk", 00:38:01.242 "block_size": 512, 00:38:01.242 "num_blocks": 65536, 00:38:01.242 "uuid": "698a5222-ac8c-4f72-a8b7-3fdbaccba50b", 00:38:01.242 "assigned_rate_limits": { 00:38:01.242 "rw_ios_per_sec": 0, 00:38:01.242 "rw_mbytes_per_sec": 0, 00:38:01.242 "r_mbytes_per_sec": 0, 00:38:01.242 "w_mbytes_per_sec": 0 00:38:01.242 }, 00:38:01.242 "claimed": true, 00:38:01.242 "claim_type": "exclusive_write", 00:38:01.242 "zoned": false, 00:38:01.242 "supported_io_types": { 
00:38:01.242 "read": true, 00:38:01.242 "write": true, 00:38:01.242 "unmap": true, 00:38:01.242 "flush": true, 00:38:01.242 "reset": true, 00:38:01.242 "nvme_admin": false, 00:38:01.242 "nvme_io": false, 00:38:01.242 "nvme_io_md": false, 00:38:01.242 "write_zeroes": true, 00:38:01.242 "zcopy": true, 00:38:01.242 "get_zone_info": false, 00:38:01.242 "zone_management": false, 00:38:01.242 "zone_append": false, 00:38:01.242 "compare": false, 00:38:01.242 "compare_and_write": false, 00:38:01.242 "abort": true, 00:38:01.242 "seek_hole": false, 00:38:01.242 "seek_data": false, 00:38:01.242 "copy": true, 00:38:01.242 "nvme_iov_md": false 00:38:01.242 }, 00:38:01.242 "memory_domains": [ 00:38:01.242 { 00:38:01.242 "dma_device_id": "system", 00:38:01.242 "dma_device_type": 1 00:38:01.242 }, 00:38:01.242 { 00:38:01.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:01.242 "dma_device_type": 2 00:38:01.242 } 00:38:01.242 ], 00:38:01.242 "driver_specific": {} 00:38:01.242 } 00:38:01.242 ] 00:38:01.243 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.243 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:38:01.243 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:38:01.243 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:38:01.243 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:38:01.243 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:01.243 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:01.243 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:01.243 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:38:01.243 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:01.243 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:01.243 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:01.243 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:01.243 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:01.243 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:01.243 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:01.243 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.243 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:01.243 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.243 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:01.243 "name": "Existed_Raid", 00:38:01.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:01.243 "strip_size_kb": 64, 00:38:01.243 "state": "configuring", 00:38:01.243 "raid_level": "raid0", 00:38:01.243 "superblock": false, 00:38:01.243 "num_base_bdevs": 4, 00:38:01.243 "num_base_bdevs_discovered": 2, 00:38:01.243 "num_base_bdevs_operational": 4, 00:38:01.243 "base_bdevs_list": [ 00:38:01.243 { 00:38:01.243 "name": "BaseBdev1", 00:38:01.243 "uuid": "a54d5aee-2731-4ca2-9982-5e59c5a0f985", 00:38:01.243 "is_configured": true, 00:38:01.243 "data_offset": 0, 00:38:01.243 "data_size": 65536 00:38:01.243 }, 00:38:01.243 { 00:38:01.243 "name": "BaseBdev2", 00:38:01.243 "uuid": "698a5222-ac8c-4f72-a8b7-3fdbaccba50b", 00:38:01.243 
"is_configured": true, 00:38:01.243 "data_offset": 0, 00:38:01.243 "data_size": 65536 00:38:01.243 }, 00:38:01.243 { 00:38:01.243 "name": "BaseBdev3", 00:38:01.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:01.243 "is_configured": false, 00:38:01.243 "data_offset": 0, 00:38:01.243 "data_size": 0 00:38:01.243 }, 00:38:01.243 { 00:38:01.243 "name": "BaseBdev4", 00:38:01.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:01.243 "is_configured": false, 00:38:01.243 "data_offset": 0, 00:38:01.243 "data_size": 0 00:38:01.243 } 00:38:01.243 ] 00:38:01.243 }' 00:38:01.243 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:01.243 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:01.821 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:38:01.821 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.821 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:01.821 [2024-12-09 05:28:48.691387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:38:01.821 BaseBdev3 00:38:01.821 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.821 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:38:01.821 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:38:01.821 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:01.821 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:38:01.821 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:01.821 05:28:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:01.821 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:38:01.821 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.821 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:01.821 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.821 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:38:01.821 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.821 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:01.821 [ 00:38:01.821 { 00:38:01.821 "name": "BaseBdev3", 00:38:01.821 "aliases": [ 00:38:01.822 "371ae5a2-a5ff-44a2-9752-c2b08e2fe4e2" 00:38:01.822 ], 00:38:01.822 "product_name": "Malloc disk", 00:38:01.822 "block_size": 512, 00:38:01.822 "num_blocks": 65536, 00:38:01.822 "uuid": "371ae5a2-a5ff-44a2-9752-c2b08e2fe4e2", 00:38:01.822 "assigned_rate_limits": { 00:38:01.822 "rw_ios_per_sec": 0, 00:38:01.822 "rw_mbytes_per_sec": 0, 00:38:01.822 "r_mbytes_per_sec": 0, 00:38:01.822 "w_mbytes_per_sec": 0 00:38:01.822 }, 00:38:01.822 "claimed": true, 00:38:01.822 "claim_type": "exclusive_write", 00:38:01.822 "zoned": false, 00:38:01.822 "supported_io_types": { 00:38:01.822 "read": true, 00:38:01.822 "write": true, 00:38:01.822 "unmap": true, 00:38:01.822 "flush": true, 00:38:01.822 "reset": true, 00:38:01.822 "nvme_admin": false, 00:38:01.822 "nvme_io": false, 00:38:01.822 "nvme_io_md": false, 00:38:01.822 "write_zeroes": true, 00:38:01.822 "zcopy": true, 00:38:01.822 "get_zone_info": false, 00:38:01.822 "zone_management": false, 00:38:01.822 "zone_append": false, 00:38:01.822 "compare": false, 00:38:01.822 "compare_and_write": false, 
00:38:01.822 "abort": true, 00:38:01.822 "seek_hole": false, 00:38:01.822 "seek_data": false, 00:38:01.822 "copy": true, 00:38:01.822 "nvme_iov_md": false 00:38:01.822 }, 00:38:01.822 "memory_domains": [ 00:38:01.822 { 00:38:01.822 "dma_device_id": "system", 00:38:01.822 "dma_device_type": 1 00:38:01.822 }, 00:38:01.822 { 00:38:01.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:01.822 "dma_device_type": 2 00:38:01.822 } 00:38:01.822 ], 00:38:01.822 "driver_specific": {} 00:38:01.822 } 00:38:01.822 ] 00:38:01.822 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.822 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:38:01.822 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:38:01.822 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:38:01.822 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:38:01.822 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:01.822 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:01.822 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:01.822 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:01.822 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:01.822 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:01.822 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:01.822 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:38:01.822 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:01.822 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:01.822 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:01.822 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.822 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:01.822 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.822 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:01.822 "name": "Existed_Raid", 00:38:01.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:01.822 "strip_size_kb": 64, 00:38:01.822 "state": "configuring", 00:38:01.822 "raid_level": "raid0", 00:38:01.822 "superblock": false, 00:38:01.822 "num_base_bdevs": 4, 00:38:01.822 "num_base_bdevs_discovered": 3, 00:38:01.822 "num_base_bdevs_operational": 4, 00:38:01.822 "base_bdevs_list": [ 00:38:01.822 { 00:38:01.822 "name": "BaseBdev1", 00:38:01.822 "uuid": "a54d5aee-2731-4ca2-9982-5e59c5a0f985", 00:38:01.822 "is_configured": true, 00:38:01.822 "data_offset": 0, 00:38:01.822 "data_size": 65536 00:38:01.822 }, 00:38:01.822 { 00:38:01.822 "name": "BaseBdev2", 00:38:01.822 "uuid": "698a5222-ac8c-4f72-a8b7-3fdbaccba50b", 00:38:01.822 "is_configured": true, 00:38:01.822 "data_offset": 0, 00:38:01.822 "data_size": 65536 00:38:01.822 }, 00:38:01.822 { 00:38:01.822 "name": "BaseBdev3", 00:38:01.822 "uuid": "371ae5a2-a5ff-44a2-9752-c2b08e2fe4e2", 00:38:01.822 "is_configured": true, 00:38:01.822 "data_offset": 0, 00:38:01.822 "data_size": 65536 00:38:01.822 }, 00:38:01.822 { 00:38:01.822 "name": "BaseBdev4", 00:38:01.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:01.822 "is_configured": false, 
00:38:01.822 "data_offset": 0, 00:38:01.822 "data_size": 0 00:38:01.822 } 00:38:01.822 ] 00:38:01.822 }' 00:38:01.822 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:01.822 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:02.387 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:38:02.387 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.387 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:02.387 [2024-12-09 05:28:49.322113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:38:02.387 [2024-12-09 05:28:49.322220] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:38:02.387 [2024-12-09 05:28:49.322236] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:38:02.387 [2024-12-09 05:28:49.322637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:38:02.387 [2024-12-09 05:28:49.322889] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:38:02.387 [2024-12-09 05:28:49.322909] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:38:02.387 [2024-12-09 05:28:49.323287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:02.387 BaseBdev4 00:38:02.387 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.387 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:38:02.387 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:38:02.387 05:28:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:02.387 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:38:02.387 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:02.387 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:02.387 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:38:02.387 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.387 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:02.387 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.387 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:38:02.387 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.387 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:02.387 [ 00:38:02.387 { 00:38:02.387 "name": "BaseBdev4", 00:38:02.387 "aliases": [ 00:38:02.387 "027a42ae-9044-4e0b-976a-d9a93051aaa2" 00:38:02.387 ], 00:38:02.387 "product_name": "Malloc disk", 00:38:02.387 "block_size": 512, 00:38:02.387 "num_blocks": 65536, 00:38:02.387 "uuid": "027a42ae-9044-4e0b-976a-d9a93051aaa2", 00:38:02.387 "assigned_rate_limits": { 00:38:02.387 "rw_ios_per_sec": 0, 00:38:02.387 "rw_mbytes_per_sec": 0, 00:38:02.387 "r_mbytes_per_sec": 0, 00:38:02.387 "w_mbytes_per_sec": 0 00:38:02.387 }, 00:38:02.387 "claimed": true, 00:38:02.387 "claim_type": "exclusive_write", 00:38:02.387 "zoned": false, 00:38:02.387 "supported_io_types": { 00:38:02.387 "read": true, 00:38:02.387 "write": true, 00:38:02.387 "unmap": true, 00:38:02.387 "flush": true, 00:38:02.387 "reset": true, 00:38:02.387 
"nvme_admin": false, 00:38:02.387 "nvme_io": false, 00:38:02.387 "nvme_io_md": false, 00:38:02.387 "write_zeroes": true, 00:38:02.387 "zcopy": true, 00:38:02.387 "get_zone_info": false, 00:38:02.387 "zone_management": false, 00:38:02.387 "zone_append": false, 00:38:02.387 "compare": false, 00:38:02.387 "compare_and_write": false, 00:38:02.387 "abort": true, 00:38:02.387 "seek_hole": false, 00:38:02.387 "seek_data": false, 00:38:02.387 "copy": true, 00:38:02.387 "nvme_iov_md": false 00:38:02.387 }, 00:38:02.387 "memory_domains": [ 00:38:02.387 { 00:38:02.387 "dma_device_id": "system", 00:38:02.387 "dma_device_type": 1 00:38:02.646 }, 00:38:02.646 { 00:38:02.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:02.646 "dma_device_type": 2 00:38:02.646 } 00:38:02.646 ], 00:38:02.646 "driver_specific": {} 00:38:02.646 } 00:38:02.646 ] 00:38:02.646 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.646 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:38:02.646 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:38:02.646 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:38:02.646 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:38:02.646 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:02.646 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:02.646 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:02.646 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:02.646 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:02.646 05:28:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:02.646 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:02.646 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:02.646 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:02.646 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:02.646 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.646 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:02.646 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:02.646 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.646 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:02.646 "name": "Existed_Raid", 00:38:02.646 "uuid": "6af50a1c-5856-4abd-8f65-10b82a7e8fad", 00:38:02.646 "strip_size_kb": 64, 00:38:02.646 "state": "online", 00:38:02.646 "raid_level": "raid0", 00:38:02.646 "superblock": false, 00:38:02.646 "num_base_bdevs": 4, 00:38:02.646 "num_base_bdevs_discovered": 4, 00:38:02.646 "num_base_bdevs_operational": 4, 00:38:02.646 "base_bdevs_list": [ 00:38:02.646 { 00:38:02.646 "name": "BaseBdev1", 00:38:02.646 "uuid": "a54d5aee-2731-4ca2-9982-5e59c5a0f985", 00:38:02.646 "is_configured": true, 00:38:02.646 "data_offset": 0, 00:38:02.646 "data_size": 65536 00:38:02.646 }, 00:38:02.646 { 00:38:02.646 "name": "BaseBdev2", 00:38:02.646 "uuid": "698a5222-ac8c-4f72-a8b7-3fdbaccba50b", 00:38:02.646 "is_configured": true, 00:38:02.646 "data_offset": 0, 00:38:02.646 "data_size": 65536 00:38:02.646 }, 00:38:02.646 { 00:38:02.646 "name": "BaseBdev3", 00:38:02.646 "uuid": 
"371ae5a2-a5ff-44a2-9752-c2b08e2fe4e2", 00:38:02.646 "is_configured": true, 00:38:02.646 "data_offset": 0, 00:38:02.646 "data_size": 65536 00:38:02.646 }, 00:38:02.646 { 00:38:02.646 "name": "BaseBdev4", 00:38:02.646 "uuid": "027a42ae-9044-4e0b-976a-d9a93051aaa2", 00:38:02.646 "is_configured": true, 00:38:02.646 "data_offset": 0, 00:38:02.646 "data_size": 65536 00:38:02.646 } 00:38:02.646 ] 00:38:02.646 }' 00:38:02.646 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:02.646 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:03.213 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:38:03.213 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:38:03.213 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:38:03.213 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:38:03.213 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:38:03.213 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:38:03.213 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:38:03.213 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:03.213 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:03.213 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:38:03.213 [2024-12-09 05:28:49.922913] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:03.213 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:03.213 05:28:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:03.213 "name": "Existed_Raid", 00:38:03.213 "aliases": [ 00:38:03.213 "6af50a1c-5856-4abd-8f65-10b82a7e8fad" 00:38:03.213 ], 00:38:03.213 "product_name": "Raid Volume", 00:38:03.213 "block_size": 512, 00:38:03.213 "num_blocks": 262144, 00:38:03.213 "uuid": "6af50a1c-5856-4abd-8f65-10b82a7e8fad", 00:38:03.213 "assigned_rate_limits": { 00:38:03.213 "rw_ios_per_sec": 0, 00:38:03.213 "rw_mbytes_per_sec": 0, 00:38:03.213 "r_mbytes_per_sec": 0, 00:38:03.213 "w_mbytes_per_sec": 0 00:38:03.213 }, 00:38:03.213 "claimed": false, 00:38:03.213 "zoned": false, 00:38:03.213 "supported_io_types": { 00:38:03.213 "read": true, 00:38:03.213 "write": true, 00:38:03.213 "unmap": true, 00:38:03.213 "flush": true, 00:38:03.213 "reset": true, 00:38:03.213 "nvme_admin": false, 00:38:03.213 "nvme_io": false, 00:38:03.213 "nvme_io_md": false, 00:38:03.213 "write_zeroes": true, 00:38:03.213 "zcopy": false, 00:38:03.213 "get_zone_info": false, 00:38:03.213 "zone_management": false, 00:38:03.213 "zone_append": false, 00:38:03.213 "compare": false, 00:38:03.213 "compare_and_write": false, 00:38:03.213 "abort": false, 00:38:03.213 "seek_hole": false, 00:38:03.213 "seek_data": false, 00:38:03.213 "copy": false, 00:38:03.213 "nvme_iov_md": false 00:38:03.213 }, 00:38:03.213 "memory_domains": [ 00:38:03.213 { 00:38:03.213 "dma_device_id": "system", 00:38:03.213 "dma_device_type": 1 00:38:03.213 }, 00:38:03.213 { 00:38:03.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:03.213 "dma_device_type": 2 00:38:03.213 }, 00:38:03.213 { 00:38:03.213 "dma_device_id": "system", 00:38:03.213 "dma_device_type": 1 00:38:03.213 }, 00:38:03.213 { 00:38:03.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:03.213 "dma_device_type": 2 00:38:03.213 }, 00:38:03.213 { 00:38:03.213 "dma_device_id": "system", 00:38:03.213 "dma_device_type": 1 00:38:03.213 }, 00:38:03.213 { 00:38:03.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:38:03.213 "dma_device_type": 2 00:38:03.213 }, 00:38:03.213 { 00:38:03.213 "dma_device_id": "system", 00:38:03.213 "dma_device_type": 1 00:38:03.213 }, 00:38:03.213 { 00:38:03.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:03.213 "dma_device_type": 2 00:38:03.213 } 00:38:03.213 ], 00:38:03.213 "driver_specific": { 00:38:03.213 "raid": { 00:38:03.213 "uuid": "6af50a1c-5856-4abd-8f65-10b82a7e8fad", 00:38:03.213 "strip_size_kb": 64, 00:38:03.213 "state": "online", 00:38:03.213 "raid_level": "raid0", 00:38:03.213 "superblock": false, 00:38:03.213 "num_base_bdevs": 4, 00:38:03.213 "num_base_bdevs_discovered": 4, 00:38:03.213 "num_base_bdevs_operational": 4, 00:38:03.213 "base_bdevs_list": [ 00:38:03.213 { 00:38:03.213 "name": "BaseBdev1", 00:38:03.213 "uuid": "a54d5aee-2731-4ca2-9982-5e59c5a0f985", 00:38:03.213 "is_configured": true, 00:38:03.213 "data_offset": 0, 00:38:03.213 "data_size": 65536 00:38:03.213 }, 00:38:03.213 { 00:38:03.213 "name": "BaseBdev2", 00:38:03.213 "uuid": "698a5222-ac8c-4f72-a8b7-3fdbaccba50b", 00:38:03.213 "is_configured": true, 00:38:03.213 "data_offset": 0, 00:38:03.213 "data_size": 65536 00:38:03.213 }, 00:38:03.213 { 00:38:03.213 "name": "BaseBdev3", 00:38:03.213 "uuid": "371ae5a2-a5ff-44a2-9752-c2b08e2fe4e2", 00:38:03.213 "is_configured": true, 00:38:03.213 "data_offset": 0, 00:38:03.213 "data_size": 65536 00:38:03.213 }, 00:38:03.213 { 00:38:03.213 "name": "BaseBdev4", 00:38:03.213 "uuid": "027a42ae-9044-4e0b-976a-d9a93051aaa2", 00:38:03.213 "is_configured": true, 00:38:03.213 "data_offset": 0, 00:38:03.213 "data_size": 65536 00:38:03.213 } 00:38:03.213 ] 00:38:03.213 } 00:38:03.213 } 00:38:03.213 }' 00:38:03.214 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:38:03.214 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:38:03.214 BaseBdev2 00:38:03.214 BaseBdev3 
00:38:03.214 BaseBdev4' 00:38:03.214 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:03.214 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:38:03.214 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:03.214 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:03.214 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:38:03.214 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:03.214 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:03.214 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:03.214 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:03.214 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:03.214 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:03.214 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:03.214 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:38:03.214 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:03.214 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:03.214 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:03.473 05:28:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:03.473 05:28:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:03.473 [2024-12-09 05:28:50.298672] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:03.473 [2024-12-09 05:28:50.298725] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:03.473 [2024-12-09 05:28:50.298828] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:03.473 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:03.732 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:03.732 "name": "Existed_Raid", 00:38:03.732 "uuid": "6af50a1c-5856-4abd-8f65-10b82a7e8fad", 00:38:03.732 "strip_size_kb": 64, 00:38:03.732 "state": "offline", 00:38:03.732 "raid_level": "raid0", 00:38:03.732 "superblock": false, 00:38:03.732 "num_base_bdevs": 4, 00:38:03.732 "num_base_bdevs_discovered": 3, 00:38:03.732 "num_base_bdevs_operational": 3, 00:38:03.732 "base_bdevs_list": [ 00:38:03.732 { 00:38:03.732 "name": null, 00:38:03.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:03.732 "is_configured": false, 00:38:03.732 "data_offset": 0, 00:38:03.732 "data_size": 65536 00:38:03.732 }, 00:38:03.732 { 00:38:03.732 "name": "BaseBdev2", 00:38:03.732 "uuid": "698a5222-ac8c-4f72-a8b7-3fdbaccba50b", 00:38:03.732 "is_configured": 
true, 00:38:03.732 "data_offset": 0, 00:38:03.732 "data_size": 65536 00:38:03.732 }, 00:38:03.732 { 00:38:03.732 "name": "BaseBdev3", 00:38:03.732 "uuid": "371ae5a2-a5ff-44a2-9752-c2b08e2fe4e2", 00:38:03.732 "is_configured": true, 00:38:03.732 "data_offset": 0, 00:38:03.732 "data_size": 65536 00:38:03.732 }, 00:38:03.732 { 00:38:03.732 "name": "BaseBdev4", 00:38:03.732 "uuid": "027a42ae-9044-4e0b-976a-d9a93051aaa2", 00:38:03.732 "is_configured": true, 00:38:03.732 "data_offset": 0, 00:38:03.732 "data_size": 65536 00:38:03.732 } 00:38:03.732 ] 00:38:03.732 }' 00:38:03.732 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:03.732 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:03.991 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:38:03.991 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:38:03.991 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:03.991 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:38:03.991 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:03.991 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:03.991 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.252 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:38:04.252 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:38:04.252 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:38:04.252 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:38:04.252 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:04.252 [2024-12-09 05:28:50.978397] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:38:04.252 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.252 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:38:04.252 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:38:04.252 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:04.252 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.252 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:04.252 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:38:04.252 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.252 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:38:04.252 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:38:04.252 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:38:04.252 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.252 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:04.252 [2024-12-09 05:28:51.116924] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:38:04.252 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.252 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:38:04.252 05:28:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:38:04.252 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:04.252 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:38:04.252 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.252 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:04.511 [2024-12-09 05:28:51.265397] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:38:04.511 [2024-12-09 05:28:51.265466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:04.511 BaseBdev2 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.511 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:04.511 [ 00:38:04.511 { 00:38:04.511 "name": "BaseBdev2", 00:38:04.511 "aliases": [ 00:38:04.511 "b77bfb56-fb86-4ab1-88b6-2a9a80fa6ec7" 00:38:04.511 ], 00:38:04.511 "product_name": "Malloc disk", 00:38:04.511 "block_size": 512, 00:38:04.511 "num_blocks": 65536, 00:38:04.511 "uuid": "b77bfb56-fb86-4ab1-88b6-2a9a80fa6ec7", 00:38:04.511 "assigned_rate_limits": { 00:38:04.511 "rw_ios_per_sec": 0, 00:38:04.511 "rw_mbytes_per_sec": 0, 00:38:04.511 "r_mbytes_per_sec": 0, 00:38:04.511 "w_mbytes_per_sec": 0 00:38:04.511 }, 00:38:04.511 "claimed": false, 00:38:04.511 "zoned": false, 00:38:04.511 "supported_io_types": { 00:38:04.511 "read": true, 00:38:04.511 "write": true, 00:38:04.511 "unmap": true, 00:38:04.511 "flush": true, 00:38:04.511 "reset": true, 00:38:04.511 "nvme_admin": false, 00:38:04.511 "nvme_io": false, 00:38:04.511 "nvme_io_md": false, 00:38:04.511 "write_zeroes": true, 00:38:04.511 "zcopy": true, 00:38:04.512 "get_zone_info": false, 00:38:04.512 "zone_management": false, 00:38:04.512 "zone_append": false, 00:38:04.512 "compare": false, 00:38:04.512 "compare_and_write": false, 00:38:04.512 "abort": true, 00:38:04.512 "seek_hole": false, 00:38:04.512 
"seek_data": false, 00:38:04.512 "copy": true, 00:38:04.512 "nvme_iov_md": false 00:38:04.512 }, 00:38:04.512 "memory_domains": [ 00:38:04.512 { 00:38:04.512 "dma_device_id": "system", 00:38:04.512 "dma_device_type": 1 00:38:04.512 }, 00:38:04.512 { 00:38:04.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:04.512 "dma_device_type": 2 00:38:04.512 } 00:38:04.512 ], 00:38:04.512 "driver_specific": {} 00:38:04.512 } 00:38:04.512 ] 00:38:04.512 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.512 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:38:04.512 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:38:04.512 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:38:04.512 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:38:04.512 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.512 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:04.770 BaseBdev3 00:38:04.770 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.770 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:38:04.770 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:38:04.770 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:04.770 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:38:04.770 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:04.770 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:38:04.770 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:38:04.770 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.770 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:04.771 [ 00:38:04.771 { 00:38:04.771 "name": "BaseBdev3", 00:38:04.771 "aliases": [ 00:38:04.771 "e25f588d-3d70-487f-8251-f5406c7ae01a" 00:38:04.771 ], 00:38:04.771 "product_name": "Malloc disk", 00:38:04.771 "block_size": 512, 00:38:04.771 "num_blocks": 65536, 00:38:04.771 "uuid": "e25f588d-3d70-487f-8251-f5406c7ae01a", 00:38:04.771 "assigned_rate_limits": { 00:38:04.771 "rw_ios_per_sec": 0, 00:38:04.771 "rw_mbytes_per_sec": 0, 00:38:04.771 "r_mbytes_per_sec": 0, 00:38:04.771 "w_mbytes_per_sec": 0 00:38:04.771 }, 00:38:04.771 "claimed": false, 00:38:04.771 "zoned": false, 00:38:04.771 "supported_io_types": { 00:38:04.771 "read": true, 00:38:04.771 "write": true, 00:38:04.771 "unmap": true, 00:38:04.771 "flush": true, 00:38:04.771 "reset": true, 00:38:04.771 "nvme_admin": false, 00:38:04.771 "nvme_io": false, 00:38:04.771 "nvme_io_md": false, 00:38:04.771 "write_zeroes": true, 00:38:04.771 "zcopy": true, 00:38:04.771 "get_zone_info": false, 00:38:04.771 "zone_management": false, 00:38:04.771 "zone_append": false, 00:38:04.771 "compare": false, 00:38:04.771 "compare_and_write": false, 00:38:04.771 "abort": true, 00:38:04.771 "seek_hole": false, 00:38:04.771 "seek_data": false, 
00:38:04.771 "copy": true, 00:38:04.771 "nvme_iov_md": false 00:38:04.771 }, 00:38:04.771 "memory_domains": [ 00:38:04.771 { 00:38:04.771 "dma_device_id": "system", 00:38:04.771 "dma_device_type": 1 00:38:04.771 }, 00:38:04.771 { 00:38:04.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:04.771 "dma_device_type": 2 00:38:04.771 } 00:38:04.771 ], 00:38:04.771 "driver_specific": {} 00:38:04.771 } 00:38:04.771 ] 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:04.771 BaseBdev4 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:04.771 
05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:04.771 [ 00:38:04.771 { 00:38:04.771 "name": "BaseBdev4", 00:38:04.771 "aliases": [ 00:38:04.771 "a8df8e1e-24f1-4c77-addd-0bea74dd94bd" 00:38:04.771 ], 00:38:04.771 "product_name": "Malloc disk", 00:38:04.771 "block_size": 512, 00:38:04.771 "num_blocks": 65536, 00:38:04.771 "uuid": "a8df8e1e-24f1-4c77-addd-0bea74dd94bd", 00:38:04.771 "assigned_rate_limits": { 00:38:04.771 "rw_ios_per_sec": 0, 00:38:04.771 "rw_mbytes_per_sec": 0, 00:38:04.771 "r_mbytes_per_sec": 0, 00:38:04.771 "w_mbytes_per_sec": 0 00:38:04.771 }, 00:38:04.771 "claimed": false, 00:38:04.771 "zoned": false, 00:38:04.771 "supported_io_types": { 00:38:04.771 "read": true, 00:38:04.771 "write": true, 00:38:04.771 "unmap": true, 00:38:04.771 "flush": true, 00:38:04.771 "reset": true, 00:38:04.771 "nvme_admin": false, 00:38:04.771 "nvme_io": false, 00:38:04.771 "nvme_io_md": false, 00:38:04.771 "write_zeroes": true, 00:38:04.771 "zcopy": true, 00:38:04.771 "get_zone_info": false, 00:38:04.771 "zone_management": false, 00:38:04.771 "zone_append": false, 00:38:04.771 "compare": false, 00:38:04.771 "compare_and_write": false, 00:38:04.771 "abort": true, 00:38:04.771 "seek_hole": false, 00:38:04.771 "seek_data": false, 00:38:04.771 
"copy": true, 00:38:04.771 "nvme_iov_md": false 00:38:04.771 }, 00:38:04.771 "memory_domains": [ 00:38:04.771 { 00:38:04.771 "dma_device_id": "system", 00:38:04.771 "dma_device_type": 1 00:38:04.771 }, 00:38:04.771 { 00:38:04.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:04.771 "dma_device_type": 2 00:38:04.771 } 00:38:04.771 ], 00:38:04.771 "driver_specific": {} 00:38:04.771 } 00:38:04.771 ] 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:04.771 [2024-12-09 05:28:51.649136] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:38:04.771 [2024-12-09 05:28:51.649190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:38:04.771 [2024-12-09 05:28:51.649236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:04.771 [2024-12-09 05:28:51.651767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:38:04.771 [2024-12-09 05:28:51.651887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.771 05:28:51 
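The trace above runs a `waitforbdev` helper after every `bdev_malloc_create`: it calls `bdev_wait_for_examine`, then `bdev_get_bdevs -b <name> -t 2000`. A sketch reconstructed from the xtrace (not the verbatim SPDK helper; `rpc_cmd` is assumed to wrap `rpc.py` against a running target):

```shell
# Sketch of the waitforbdev flow visible in the xtrace: let bdev examine
# callbacks finish, then query the new bdev with a 2000 ms timeout so the
# test does not race bdev registration.
waitforbdev() {
    local bdev_name=$1
    local bdev_timeout=${2:-2000}   # default seen in the log
    rpc_cmd bdev_wait_for_examine || return 1
    rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" > /dev/null
}
```

Called as `waitforbdev BaseBdev4`, it returns 0 once `bdev_get_bdevs` can describe the bdev, which is why the trace ends each creation block with `return 0`.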
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.771 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:04.771 "name": "Existed_Raid", 00:38:04.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:04.771 "strip_size_kb": 64, 00:38:04.771 "state": "configuring", 00:38:04.771 
"raid_level": "raid0", 00:38:04.771 "superblock": false, 00:38:04.771 "num_base_bdevs": 4, 00:38:04.771 "num_base_bdevs_discovered": 3, 00:38:04.771 "num_base_bdevs_operational": 4, 00:38:04.771 "base_bdevs_list": [ 00:38:04.771 { 00:38:04.771 "name": "BaseBdev1", 00:38:04.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:04.771 "is_configured": false, 00:38:04.771 "data_offset": 0, 00:38:04.771 "data_size": 0 00:38:04.771 }, 00:38:04.771 { 00:38:04.771 "name": "BaseBdev2", 00:38:04.771 "uuid": "b77bfb56-fb86-4ab1-88b6-2a9a80fa6ec7", 00:38:04.772 "is_configured": true, 00:38:04.772 "data_offset": 0, 00:38:04.772 "data_size": 65536 00:38:04.772 }, 00:38:04.772 { 00:38:04.772 "name": "BaseBdev3", 00:38:04.772 "uuid": "e25f588d-3d70-487f-8251-f5406c7ae01a", 00:38:04.772 "is_configured": true, 00:38:04.772 "data_offset": 0, 00:38:04.772 "data_size": 65536 00:38:04.772 }, 00:38:04.772 { 00:38:04.772 "name": "BaseBdev4", 00:38:04.772 "uuid": "a8df8e1e-24f1-4c77-addd-0bea74dd94bd", 00:38:04.772 "is_configured": true, 00:38:04.772 "data_offset": 0, 00:38:04.772 "data_size": 65536 00:38:04.772 } 00:38:04.772 ] 00:38:04.772 }' 00:38:04.772 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:04.772 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:05.337 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:38:05.337 05:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.338 05:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:05.338 [2024-12-09 05:28:52.181383] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:38:05.338 05:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.338 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:38:05.338 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:05.338 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:05.338 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:05.338 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:05.338 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:05.338 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:05.338 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:05.338 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:05.338 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:05.338 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:05.338 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:05.338 05:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.338 05:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:05.338 05:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.338 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:05.338 "name": "Existed_Raid", 00:38:05.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:05.338 "strip_size_kb": 64, 00:38:05.338 "state": "configuring", 00:38:05.338 "raid_level": "raid0", 00:38:05.338 "superblock": false, 00:38:05.338 
"num_base_bdevs": 4, 00:38:05.338 "num_base_bdevs_discovered": 2, 00:38:05.338 "num_base_bdevs_operational": 4, 00:38:05.338 "base_bdevs_list": [ 00:38:05.338 { 00:38:05.338 "name": "BaseBdev1", 00:38:05.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:05.338 "is_configured": false, 00:38:05.338 "data_offset": 0, 00:38:05.338 "data_size": 0 00:38:05.338 }, 00:38:05.338 { 00:38:05.338 "name": null, 00:38:05.338 "uuid": "b77bfb56-fb86-4ab1-88b6-2a9a80fa6ec7", 00:38:05.338 "is_configured": false, 00:38:05.338 "data_offset": 0, 00:38:05.338 "data_size": 65536 00:38:05.338 }, 00:38:05.338 { 00:38:05.338 "name": "BaseBdev3", 00:38:05.338 "uuid": "e25f588d-3d70-487f-8251-f5406c7ae01a", 00:38:05.338 "is_configured": true, 00:38:05.338 "data_offset": 0, 00:38:05.338 "data_size": 65536 00:38:05.338 }, 00:38:05.338 { 00:38:05.338 "name": "BaseBdev4", 00:38:05.338 "uuid": "a8df8e1e-24f1-4c77-addd-0bea74dd94bd", 00:38:05.338 "is_configured": true, 00:38:05.338 "data_offset": 0, 00:38:05.338 "data_size": 65536 00:38:05.338 } 00:38:05.338 ] 00:38:05.338 }' 00:38:05.338 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:05.338 05:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:05.904 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:38:05.904 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:05.904 05:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:38:05.905 05:28:52 
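Each `verify_raid_bdev_state` call above pipes `bdev_raid_get_bdevs all` through the jq filter shown in the trace. A self-contained sketch of that check, run against a trimmed copy of the JSON from the log (after BaseBdev2 was removed) instead of a live target:

```shell
# Trimmed copy of the `bdev_raid_get_bdevs all` output logged above,
# after bdev_raid_remove_base_bdev BaseBdev2 (fields shortened for brevity).
raid_bdevs='[{"name": "Existed_Raid", "state": "configuring",
              "raid_level": "raid0", "strip_size_kb": 64,
              "num_base_bdevs": 4, "num_base_bdevs_discovered": 2}]'

# Same filter the test uses to isolate the array under test.
info=$(echo "$raid_bdevs" | jq -r '.[] | select(.name == "Existed_Raid")')

state=$(echo "$info" | jq -r '.state')
discovered=$(echo "$info" | jq -r '.num_base_bdevs_discovered')
echo "$state $discovered"   # configuring 2
```

With a member removed, the raid stays in `configuring` (it never reached `online`), and only `num_base_bdevs_discovered` drops while `num_base_bdevs_operational` stays at 4.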
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:05.905 [2024-12-09 05:28:52.794702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:05.905 BaseBdev1 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:38:05.905 [ 00:38:05.905 { 00:38:05.905 "name": "BaseBdev1", 00:38:05.905 "aliases": [ 00:38:05.905 "24b319f2-c4f6-43d0-b1a7-d5c376823602" 00:38:05.905 ], 00:38:05.905 "product_name": "Malloc disk", 00:38:05.905 "block_size": 512, 00:38:05.905 "num_blocks": 65536, 00:38:05.905 "uuid": "24b319f2-c4f6-43d0-b1a7-d5c376823602", 00:38:05.905 "assigned_rate_limits": { 00:38:05.905 "rw_ios_per_sec": 0, 00:38:05.905 "rw_mbytes_per_sec": 0, 00:38:05.905 "r_mbytes_per_sec": 0, 00:38:05.905 "w_mbytes_per_sec": 0 00:38:05.905 }, 00:38:05.905 "claimed": true, 00:38:05.905 "claim_type": "exclusive_write", 00:38:05.905 "zoned": false, 00:38:05.905 "supported_io_types": { 00:38:05.905 "read": true, 00:38:05.905 "write": true, 00:38:05.905 "unmap": true, 00:38:05.905 "flush": true, 00:38:05.905 "reset": true, 00:38:05.905 "nvme_admin": false, 00:38:05.905 "nvme_io": false, 00:38:05.905 "nvme_io_md": false, 00:38:05.905 "write_zeroes": true, 00:38:05.905 "zcopy": true, 00:38:05.905 "get_zone_info": false, 00:38:05.905 "zone_management": false, 00:38:05.905 "zone_append": false, 00:38:05.905 "compare": false, 00:38:05.905 "compare_and_write": false, 00:38:05.905 "abort": true, 00:38:05.905 "seek_hole": false, 00:38:05.905 "seek_data": false, 00:38:05.905 "copy": true, 00:38:05.905 "nvme_iov_md": false 00:38:05.905 }, 00:38:05.905 "memory_domains": [ 00:38:05.905 { 00:38:05.905 "dma_device_id": "system", 00:38:05.905 "dma_device_type": 1 00:38:05.905 }, 00:38:05.905 { 00:38:05.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:05.905 "dma_device_type": 2 00:38:05.905 } 00:38:05.905 ], 00:38:05.905 "driver_specific": {} 00:38:05.905 } 00:38:05.905 ] 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:05.905 05:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.164 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:06.164 "name": "Existed_Raid", 00:38:06.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:06.164 "strip_size_kb": 64, 00:38:06.164 "state": "configuring", 00:38:06.164 "raid_level": "raid0", 00:38:06.164 "superblock": false, 
00:38:06.164 "num_base_bdevs": 4, 00:38:06.164 "num_base_bdevs_discovered": 3, 00:38:06.164 "num_base_bdevs_operational": 4, 00:38:06.164 "base_bdevs_list": [ 00:38:06.164 { 00:38:06.164 "name": "BaseBdev1", 00:38:06.164 "uuid": "24b319f2-c4f6-43d0-b1a7-d5c376823602", 00:38:06.164 "is_configured": true, 00:38:06.164 "data_offset": 0, 00:38:06.164 "data_size": 65536 00:38:06.164 }, 00:38:06.164 { 00:38:06.164 "name": null, 00:38:06.164 "uuid": "b77bfb56-fb86-4ab1-88b6-2a9a80fa6ec7", 00:38:06.164 "is_configured": false, 00:38:06.164 "data_offset": 0, 00:38:06.164 "data_size": 65536 00:38:06.164 }, 00:38:06.164 { 00:38:06.164 "name": "BaseBdev3", 00:38:06.164 "uuid": "e25f588d-3d70-487f-8251-f5406c7ae01a", 00:38:06.164 "is_configured": true, 00:38:06.164 "data_offset": 0, 00:38:06.164 "data_size": 65536 00:38:06.164 }, 00:38:06.164 { 00:38:06.164 "name": "BaseBdev4", 00:38:06.164 "uuid": "a8df8e1e-24f1-4c77-addd-0bea74dd94bd", 00:38:06.164 "is_configured": true, 00:38:06.164 "data_offset": 0, 00:38:06.164 "data_size": 65536 00:38:06.164 } 00:38:06.164 ] 00:38:06.164 }' 00:38:06.164 05:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:06.164 05:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:06.423 05:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:06.423 05:28:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.423 05:28:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:06.423 05:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:38:06.423 05:28:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.681 05:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:38:06.681 05:28:53 
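The `[[ true == \t\r\u\e ]]` comparisons in the trace come from per-slot checks such as `jq '.[0].base_bdevs_list[0].is_configured'`. A runnable sketch of that check against a canned `base_bdevs_list` mirroring the state logged above (BaseBdev1 re-created and claimed, the removed BaseBdev2 slot left as `null`):

```shell
# Trimmed base_bdevs_list from the log: slot 0 is the fresh BaseBdev1,
# slot 1 is the hole left by the removed BaseBdev2.
raid_bdevs='[{"name": "Existed_Raid", "base_bdevs_list": [
  {"name": "BaseBdev1", "is_configured": true},
  {"name": null,        "is_configured": false},
  {"name": "BaseBdev3", "is_configured": true},
  {"name": "BaseBdev4", "is_configured": true}]}]'

slot0=$(echo "$raid_bdevs" | jq '.[0].base_bdevs_list[0].is_configured')
slot1=$(echo "$raid_bdevs" | jq '.[0].base_bdevs_list[1].is_configured')
echo "$slot0 $slot1"   # true false
```

Note the slot keeps its position and uuid after removal; only `name` goes to `null` and `is_configured` flips to false, which is exactly what the dumps above show for `b77bfb56-…`.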
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:38:06.681 05:28:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.681 05:28:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:06.681 [2024-12-09 05:28:53.403045] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:38:06.681 05:28:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.681 05:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:38:06.681 05:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:06.681 05:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:06.681 05:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:06.681 05:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:06.681 05:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:06.681 05:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:06.681 05:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:06.681 05:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:06.681 05:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:06.681 05:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:06.681 05:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:06.681 05:28:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.681 05:28:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:06.681 05:28:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.681 05:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:06.681 "name": "Existed_Raid", 00:38:06.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:06.681 "strip_size_kb": 64, 00:38:06.681 "state": "configuring", 00:38:06.681 "raid_level": "raid0", 00:38:06.681 "superblock": false, 00:38:06.681 "num_base_bdevs": 4, 00:38:06.681 "num_base_bdevs_discovered": 2, 00:38:06.681 "num_base_bdevs_operational": 4, 00:38:06.681 "base_bdevs_list": [ 00:38:06.681 { 00:38:06.681 "name": "BaseBdev1", 00:38:06.681 "uuid": "24b319f2-c4f6-43d0-b1a7-d5c376823602", 00:38:06.681 "is_configured": true, 00:38:06.681 "data_offset": 0, 00:38:06.681 "data_size": 65536 00:38:06.681 }, 00:38:06.681 { 00:38:06.681 "name": null, 00:38:06.681 "uuid": "b77bfb56-fb86-4ab1-88b6-2a9a80fa6ec7", 00:38:06.681 "is_configured": false, 00:38:06.681 "data_offset": 0, 00:38:06.681 "data_size": 65536 00:38:06.681 }, 00:38:06.681 { 00:38:06.681 "name": null, 00:38:06.681 "uuid": "e25f588d-3d70-487f-8251-f5406c7ae01a", 00:38:06.681 "is_configured": false, 00:38:06.681 "data_offset": 0, 00:38:06.681 "data_size": 65536 00:38:06.681 }, 00:38:06.681 { 00:38:06.681 "name": "BaseBdev4", 00:38:06.681 "uuid": "a8df8e1e-24f1-4c77-addd-0bea74dd94bd", 00:38:06.681 "is_configured": true, 00:38:06.681 "data_offset": 0, 00:38:06.681 "data_size": 65536 00:38:06.681 } 00:38:06.681 ] 00:38:06.681 }' 00:38:06.681 05:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:06.681 05:28:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:07.248 05:28:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:38:07.248 05:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:07.248 05:28:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.248 05:28:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:07.248 05:28:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.248 05:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:38:07.248 05:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:38:07.248 05:28:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.248 05:28:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:07.248 [2024-12-09 05:28:53.979655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:38:07.248 05:28:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.248 05:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:38:07.248 05:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:07.248 05:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:07.248 05:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:07.248 05:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:07.248 05:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:07.248 05:28:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:07.248 05:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:07.248 05:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:07.248 05:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:07.248 05:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:07.248 05:28:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.248 05:28:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:07.248 05:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:07.248 05:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.248 05:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:07.248 "name": "Existed_Raid", 00:38:07.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:07.248 "strip_size_kb": 64, 00:38:07.248 "state": "configuring", 00:38:07.248 "raid_level": "raid0", 00:38:07.248 "superblock": false, 00:38:07.248 "num_base_bdevs": 4, 00:38:07.248 "num_base_bdevs_discovered": 3, 00:38:07.248 "num_base_bdevs_operational": 4, 00:38:07.248 "base_bdevs_list": [ 00:38:07.248 { 00:38:07.248 "name": "BaseBdev1", 00:38:07.248 "uuid": "24b319f2-c4f6-43d0-b1a7-d5c376823602", 00:38:07.248 "is_configured": true, 00:38:07.248 "data_offset": 0, 00:38:07.248 "data_size": 65536 00:38:07.248 }, 00:38:07.248 { 00:38:07.248 "name": null, 00:38:07.248 "uuid": "b77bfb56-fb86-4ab1-88b6-2a9a80fa6ec7", 00:38:07.248 "is_configured": false, 00:38:07.248 "data_offset": 0, 00:38:07.248 "data_size": 65536 00:38:07.248 }, 00:38:07.248 { 00:38:07.248 "name": "BaseBdev3", 00:38:07.248 "uuid": "e25f588d-3d70-487f-8251-f5406c7ae01a", 
00:38:07.248 "is_configured": true, 00:38:07.248 "data_offset": 0, 00:38:07.248 "data_size": 65536 00:38:07.248 }, 00:38:07.248 { 00:38:07.248 "name": "BaseBdev4", 00:38:07.248 "uuid": "a8df8e1e-24f1-4c77-addd-0bea74dd94bd", 00:38:07.248 "is_configured": true, 00:38:07.248 "data_offset": 0, 00:38:07.248 "data_size": 65536 00:38:07.248 } 00:38:07.248 ] 00:38:07.248 }' 00:38:07.248 05:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:07.248 05:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:07.815 05:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:07.815 05:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.815 05:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:38:07.815 05:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:07.815 05:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.815 05:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:38:07.815 05:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:38:07.815 05:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.816 05:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:07.816 [2024-12-09 05:28:54.559876] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:07.816 05:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.816 05:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:38:07.816 05:28:54 
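After `bdev_raid_add_base_bdev Existed_Raid BaseBdev3` the dump above reports `num_base_bdevs_discovered: 3` again. That count can be cross-checked against `base_bdevs_list` with a jq one-liner; this is a hypothetical summary helper, not a check the test itself performs:

```shell
# Canned list matching the post-re-add state logged above: three slots
# configured, the BaseBdev2 slot still empty.
raid_bdevs='[{"name": "Existed_Raid", "num_base_bdevs": 4, "base_bdevs_list": [
  {"name": "BaseBdev1", "is_configured": true},
  {"name": null,        "is_configured": false},
  {"name": "BaseBdev3", "is_configured": true},
  {"name": "BaseBdev4", "is_configured": true}]}]'

# Count the configured members; should match num_base_bdevs_discovered.
configured=$(echo "$raid_bdevs" |
    jq '[.[0].base_bdevs_list[] | select(.is_configured)] | length')
echo "$configured"   # 3
```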
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:07.816 05:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:07.816 05:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:07.816 05:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:07.816 05:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:07.816 05:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:07.816 05:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:07.816 05:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:07.816 05:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:07.816 05:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:07.816 05:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:07.816 05:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.816 05:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:07.816 05:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.816 05:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:07.816 "name": "Existed_Raid", 00:38:07.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:07.816 "strip_size_kb": 64, 00:38:07.816 "state": "configuring", 00:38:07.816 "raid_level": "raid0", 00:38:07.816 "superblock": false, 00:38:07.816 "num_base_bdevs": 4, 00:38:07.816 "num_base_bdevs_discovered": 2, 00:38:07.816 
"num_base_bdevs_operational": 4, 00:38:07.816 "base_bdevs_list": [ 00:38:07.816 { 00:38:07.816 "name": null, 00:38:07.816 "uuid": "24b319f2-c4f6-43d0-b1a7-d5c376823602", 00:38:07.816 "is_configured": false, 00:38:07.816 "data_offset": 0, 00:38:07.816 "data_size": 65536 00:38:07.816 }, 00:38:07.816 { 00:38:07.816 "name": null, 00:38:07.816 "uuid": "b77bfb56-fb86-4ab1-88b6-2a9a80fa6ec7", 00:38:07.816 "is_configured": false, 00:38:07.816 "data_offset": 0, 00:38:07.816 "data_size": 65536 00:38:07.816 }, 00:38:07.816 { 00:38:07.816 "name": "BaseBdev3", 00:38:07.816 "uuid": "e25f588d-3d70-487f-8251-f5406c7ae01a", 00:38:07.816 "is_configured": true, 00:38:07.816 "data_offset": 0, 00:38:07.816 "data_size": 65536 00:38:07.816 }, 00:38:07.816 { 00:38:07.816 "name": "BaseBdev4", 00:38:07.816 "uuid": "a8df8e1e-24f1-4c77-addd-0bea74dd94bd", 00:38:07.816 "is_configured": true, 00:38:07.816 "data_offset": 0, 00:38:07.816 "data_size": 65536 00:38:07.816 } 00:38:07.816 ] 00:38:07.816 }' 00:38:07.816 05:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:07.816 05:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:08.382 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:08.382 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.382 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:38:08.382 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:08.382 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.382 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:38:08.382 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:38:08.382 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.382 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:08.382 [2024-12-09 05:28:55.213912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:08.382 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.382 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:38:08.382 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:08.382 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:08.382 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:08.383 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:08.383 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:08.383 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:08.383 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:08.383 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:08.383 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:08.383 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:08.383 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:08.383 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.383 
05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:08.383 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.383 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:08.383 "name": "Existed_Raid", 00:38:08.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:08.383 "strip_size_kb": 64, 00:38:08.383 "state": "configuring", 00:38:08.383 "raid_level": "raid0", 00:38:08.383 "superblock": false, 00:38:08.383 "num_base_bdevs": 4, 00:38:08.383 "num_base_bdevs_discovered": 3, 00:38:08.383 "num_base_bdevs_operational": 4, 00:38:08.383 "base_bdevs_list": [ 00:38:08.383 { 00:38:08.383 "name": null, 00:38:08.383 "uuid": "24b319f2-c4f6-43d0-b1a7-d5c376823602", 00:38:08.383 "is_configured": false, 00:38:08.383 "data_offset": 0, 00:38:08.383 "data_size": 65536 00:38:08.383 }, 00:38:08.383 { 00:38:08.383 "name": "BaseBdev2", 00:38:08.383 "uuid": "b77bfb56-fb86-4ab1-88b6-2a9a80fa6ec7", 00:38:08.383 "is_configured": true, 00:38:08.383 "data_offset": 0, 00:38:08.383 "data_size": 65536 00:38:08.383 }, 00:38:08.383 { 00:38:08.383 "name": "BaseBdev3", 00:38:08.383 "uuid": "e25f588d-3d70-487f-8251-f5406c7ae01a", 00:38:08.383 "is_configured": true, 00:38:08.383 "data_offset": 0, 00:38:08.383 "data_size": 65536 00:38:08.383 }, 00:38:08.383 { 00:38:08.383 "name": "BaseBdev4", 00:38:08.383 "uuid": "a8df8e1e-24f1-4c77-addd-0bea74dd94bd", 00:38:08.383 "is_configured": true, 00:38:08.383 "data_offset": 0, 00:38:08.383 "data_size": 65536 00:38:08.383 } 00:38:08.383 ] 00:38:08.383 }' 00:38:08.383 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:08.383 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:08.948 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:08.948 05:28:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.948 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:38:08.948 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:08.948 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.948 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:38:08.948 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:38:08.948 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:08.948 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.948 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:08.948 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.948 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 24b319f2-c4f6-43d0-b1a7-d5c376823602 00:38:08.948 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.948 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:08.948 [2024-12-09 05:28:55.888110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:38:08.948 [2024-12-09 05:28:55.888168] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:38:08.948 [2024-12-09 05:28:55.888180] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:38:08.948 [2024-12-09 05:28:55.888482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:38:08.948 [2024-12-09 05:28:55.888651] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:38:08.948 [2024-12-09 05:28:55.888669] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:38:08.948 [2024-12-09 05:28:55.888954] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:08.948 NewBaseBdev 00:38:08.948 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.948 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:38:08.948 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:38:08.948 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:08.948 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:38:08.948 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:08.948 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:08.948 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:38:08.948 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.948 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:08.948 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.948 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:38:08.948 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.948 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:38:08.948 [ 00:38:08.948 { 00:38:08.948 "name": "NewBaseBdev", 00:38:08.948 "aliases": [ 00:38:08.948 "24b319f2-c4f6-43d0-b1a7-d5c376823602" 00:38:08.948 ], 00:38:08.948 "product_name": "Malloc disk", 00:38:08.948 "block_size": 512, 00:38:08.948 "num_blocks": 65536, 00:38:08.948 "uuid": "24b319f2-c4f6-43d0-b1a7-d5c376823602", 00:38:08.948 "assigned_rate_limits": { 00:38:08.948 "rw_ios_per_sec": 0, 00:38:08.948 "rw_mbytes_per_sec": 0, 00:38:08.948 "r_mbytes_per_sec": 0, 00:38:08.948 "w_mbytes_per_sec": 0 00:38:08.948 }, 00:38:08.948 "claimed": true, 00:38:08.948 "claim_type": "exclusive_write", 00:38:08.948 "zoned": false, 00:38:08.948 "supported_io_types": { 00:38:08.948 "read": true, 00:38:08.948 "write": true, 00:38:09.207 "unmap": true, 00:38:09.207 "flush": true, 00:38:09.207 "reset": true, 00:38:09.207 "nvme_admin": false, 00:38:09.207 "nvme_io": false, 00:38:09.207 "nvme_io_md": false, 00:38:09.207 "write_zeroes": true, 00:38:09.207 "zcopy": true, 00:38:09.207 "get_zone_info": false, 00:38:09.207 "zone_management": false, 00:38:09.207 "zone_append": false, 00:38:09.207 "compare": false, 00:38:09.207 "compare_and_write": false, 00:38:09.207 "abort": true, 00:38:09.207 "seek_hole": false, 00:38:09.207 "seek_data": false, 00:38:09.207 "copy": true, 00:38:09.207 "nvme_iov_md": false 00:38:09.207 }, 00:38:09.207 "memory_domains": [ 00:38:09.207 { 00:38:09.207 "dma_device_id": "system", 00:38:09.207 "dma_device_type": 1 00:38:09.207 }, 00:38:09.207 { 00:38:09.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:09.207 "dma_device_type": 2 00:38:09.207 } 00:38:09.207 ], 00:38:09.207 "driver_specific": {} 00:38:09.207 } 00:38:09.207 ] 00:38:09.207 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.207 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:38:09.207 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:38:09.207 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:09.207 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:09.207 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:09.207 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:09.207 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:09.207 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:09.207 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:09.207 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:09.207 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:09.207 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:09.207 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.207 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:09.207 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:09.207 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.207 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:09.207 "name": "Existed_Raid", 00:38:09.207 "uuid": "9274bec0-a006-4a4c-aef5-fd37968cd724", 00:38:09.207 "strip_size_kb": 64, 00:38:09.207 "state": "online", 00:38:09.207 "raid_level": "raid0", 00:38:09.207 "superblock": false, 00:38:09.207 "num_base_bdevs": 4, 00:38:09.207 
"num_base_bdevs_discovered": 4, 00:38:09.207 "num_base_bdevs_operational": 4, 00:38:09.207 "base_bdevs_list": [ 00:38:09.207 { 00:38:09.207 "name": "NewBaseBdev", 00:38:09.207 "uuid": "24b319f2-c4f6-43d0-b1a7-d5c376823602", 00:38:09.207 "is_configured": true, 00:38:09.207 "data_offset": 0, 00:38:09.207 "data_size": 65536 00:38:09.207 }, 00:38:09.207 { 00:38:09.207 "name": "BaseBdev2", 00:38:09.207 "uuid": "b77bfb56-fb86-4ab1-88b6-2a9a80fa6ec7", 00:38:09.207 "is_configured": true, 00:38:09.207 "data_offset": 0, 00:38:09.207 "data_size": 65536 00:38:09.207 }, 00:38:09.207 { 00:38:09.207 "name": "BaseBdev3", 00:38:09.207 "uuid": "e25f588d-3d70-487f-8251-f5406c7ae01a", 00:38:09.207 "is_configured": true, 00:38:09.207 "data_offset": 0, 00:38:09.207 "data_size": 65536 00:38:09.207 }, 00:38:09.207 { 00:38:09.207 "name": "BaseBdev4", 00:38:09.207 "uuid": "a8df8e1e-24f1-4c77-addd-0bea74dd94bd", 00:38:09.207 "is_configured": true, 00:38:09.207 "data_offset": 0, 00:38:09.207 "data_size": 65536 00:38:09.207 } 00:38:09.207 ] 00:38:09.207 }' 00:38:09.207 05:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:09.207 05:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:09.774 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:38:09.774 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:38:09.774 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:38:09.774 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:38:09.774 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:38:09.774 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:38:09.774 05:28:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:38:09.774 05:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.774 05:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:09.774 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:38:09.774 [2024-12-09 05:28:56.476758] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:09.774 05:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.774 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:09.774 "name": "Existed_Raid", 00:38:09.774 "aliases": [ 00:38:09.774 "9274bec0-a006-4a4c-aef5-fd37968cd724" 00:38:09.774 ], 00:38:09.774 "product_name": "Raid Volume", 00:38:09.774 "block_size": 512, 00:38:09.774 "num_blocks": 262144, 00:38:09.774 "uuid": "9274bec0-a006-4a4c-aef5-fd37968cd724", 00:38:09.774 "assigned_rate_limits": { 00:38:09.774 "rw_ios_per_sec": 0, 00:38:09.774 "rw_mbytes_per_sec": 0, 00:38:09.774 "r_mbytes_per_sec": 0, 00:38:09.774 "w_mbytes_per_sec": 0 00:38:09.774 }, 00:38:09.774 "claimed": false, 00:38:09.774 "zoned": false, 00:38:09.774 "supported_io_types": { 00:38:09.774 "read": true, 00:38:09.775 "write": true, 00:38:09.775 "unmap": true, 00:38:09.775 "flush": true, 00:38:09.775 "reset": true, 00:38:09.775 "nvme_admin": false, 00:38:09.775 "nvme_io": false, 00:38:09.775 "nvme_io_md": false, 00:38:09.775 "write_zeroes": true, 00:38:09.775 "zcopy": false, 00:38:09.775 "get_zone_info": false, 00:38:09.775 "zone_management": false, 00:38:09.775 "zone_append": false, 00:38:09.775 "compare": false, 00:38:09.775 "compare_and_write": false, 00:38:09.775 "abort": false, 00:38:09.775 "seek_hole": false, 00:38:09.775 "seek_data": false, 00:38:09.775 "copy": false, 00:38:09.775 "nvme_iov_md": false 00:38:09.775 }, 00:38:09.775 "memory_domains": [ 
00:38:09.775 { 00:38:09.775 "dma_device_id": "system", 00:38:09.775 "dma_device_type": 1 00:38:09.775 }, 00:38:09.775 { 00:38:09.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:09.775 "dma_device_type": 2 00:38:09.775 }, 00:38:09.775 { 00:38:09.775 "dma_device_id": "system", 00:38:09.775 "dma_device_type": 1 00:38:09.775 }, 00:38:09.775 { 00:38:09.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:09.775 "dma_device_type": 2 00:38:09.775 }, 00:38:09.775 { 00:38:09.775 "dma_device_id": "system", 00:38:09.775 "dma_device_type": 1 00:38:09.775 }, 00:38:09.775 { 00:38:09.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:09.775 "dma_device_type": 2 00:38:09.775 }, 00:38:09.775 { 00:38:09.775 "dma_device_id": "system", 00:38:09.775 "dma_device_type": 1 00:38:09.775 }, 00:38:09.775 { 00:38:09.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:09.775 "dma_device_type": 2 00:38:09.775 } 00:38:09.775 ], 00:38:09.775 "driver_specific": { 00:38:09.775 "raid": { 00:38:09.775 "uuid": "9274bec0-a006-4a4c-aef5-fd37968cd724", 00:38:09.775 "strip_size_kb": 64, 00:38:09.775 "state": "online", 00:38:09.775 "raid_level": "raid0", 00:38:09.775 "superblock": false, 00:38:09.775 "num_base_bdevs": 4, 00:38:09.775 "num_base_bdevs_discovered": 4, 00:38:09.775 "num_base_bdevs_operational": 4, 00:38:09.775 "base_bdevs_list": [ 00:38:09.775 { 00:38:09.775 "name": "NewBaseBdev", 00:38:09.775 "uuid": "24b319f2-c4f6-43d0-b1a7-d5c376823602", 00:38:09.775 "is_configured": true, 00:38:09.775 "data_offset": 0, 00:38:09.775 "data_size": 65536 00:38:09.775 }, 00:38:09.775 { 00:38:09.775 "name": "BaseBdev2", 00:38:09.775 "uuid": "b77bfb56-fb86-4ab1-88b6-2a9a80fa6ec7", 00:38:09.775 "is_configured": true, 00:38:09.775 "data_offset": 0, 00:38:09.775 "data_size": 65536 00:38:09.775 }, 00:38:09.775 { 00:38:09.775 "name": "BaseBdev3", 00:38:09.775 "uuid": "e25f588d-3d70-487f-8251-f5406c7ae01a", 00:38:09.775 "is_configured": true, 00:38:09.775 "data_offset": 0, 00:38:09.775 "data_size": 65536 
00:38:09.775 }, 00:38:09.775 { 00:38:09.775 "name": "BaseBdev4", 00:38:09.775 "uuid": "a8df8e1e-24f1-4c77-addd-0bea74dd94bd", 00:38:09.775 "is_configured": true, 00:38:09.775 "data_offset": 0, 00:38:09.775 "data_size": 65536 00:38:09.775 } 00:38:09.775 ] 00:38:09.775 } 00:38:09.775 } 00:38:09.775 }' 00:38:09.775 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:38:09.775 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:38:09.775 BaseBdev2 00:38:09.775 BaseBdev3 00:38:09.775 BaseBdev4' 00:38:09.775 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:09.775 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:38:09.775 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:09.775 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:38:09.775 05:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.775 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:09.775 05:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:09.775 05:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.775 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:09.775 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:09.775 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:09.775 
05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:38:09.775 05:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.775 05:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:09.775 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:09.775 05:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.775 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:09.775 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:09.775 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:09.775 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:38:09.775 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:09.775 05:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.775 05:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:10.034 05:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:10.034 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:10.034 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:10.034 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:10.034 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:38:10.034 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:38:10.034 05:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:10.034 05:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:10.034 05:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:10.034 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:10.034 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:10.034 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:38:10.034 05:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:10.034 05:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:10.034 [2024-12-09 05:28:56.848401] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:38:10.034 [2024-12-09 05:28:56.848435] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:10.034 [2024-12-09 05:28:56.848517] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:10.034 [2024-12-09 05:28:56.848605] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:10.034 [2024-12-09 05:28:56.848621] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:38:10.034 05:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:10.034 05:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69517 00:38:10.034 05:28:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69517 ']' 00:38:10.034 05:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69517 00:38:10.034 05:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:38:10.034 05:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:10.034 05:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69517 00:38:10.034 05:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:10.034 05:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:10.034 killing process with pid 69517 00:38:10.034 05:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69517' 00:38:10.034 05:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69517 00:38:10.034 [2024-12-09 05:28:56.886331] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:10.034 05:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69517 00:38:10.293 [2024-12-09 05:28:57.193964] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:38:11.691 00:38:11.691 real 0m13.215s 00:38:11.691 user 0m21.786s 00:38:11.691 sys 0m1.967s 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:11.691 ************************************ 00:38:11.691 END TEST raid_state_function_test 00:38:11.691 ************************************ 00:38:11.691 05:28:58 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:38:11.691 05:28:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:38:11.691 05:28:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:11.691 05:28:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:11.691 ************************************ 00:38:11.691 START TEST raid_state_function_test_sb 00:38:11.691 ************************************ 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:38:11.691 
05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70206 00:38:11.691 Process raid pid: 70206 00:38:11.691 Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70206' 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70206 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70206 ']' 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:11.691 05:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:11.691 [2024-12-09 05:28:58.511425] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:11.691 [2024-12-09 05:28:58.511641] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:11.948 [2024-12-09 05:28:58.713808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:11.948 [2024-12-09 05:28:58.873830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:12.206 [2024-12-09 05:28:59.084562] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:12.206 [2024-12-09 05:28:59.084608] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:12.773 05:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:12.773 05:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:38:12.773 05:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:38:12.773 05:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.773 05:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:12.773 [2024-12-09 05:28:59.491998] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:38:12.773 [2024-12-09 05:28:59.492309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:38:12.773 [2024-12-09 05:28:59.492448] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:12.773 [2024-12-09 05:28:59.492508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:12.773 [2024-12-09 05:28:59.492527] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:38:12.773 [2024-12-09 05:28:59.492544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:38:12.773 [2024-12-09 05:28:59.492553] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:38:12.773 [2024-12-09 05:28:59.492567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:38:12.773 05:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.773 05:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:38:12.773 05:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:12.773 05:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:12.773 05:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:12.773 05:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:12.773 05:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:12.773 05:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:12.773 05:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:12.773 05:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:12.773 05:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:12.773 05:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:12.773 05:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.773 05:28:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:12.773 05:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:12.773 05:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.773 05:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:12.773 "name": "Existed_Raid", 00:38:12.773 "uuid": "2bc20750-4ee2-4412-8523-f9d98c215e04", 00:38:12.773 "strip_size_kb": 64, 00:38:12.773 "state": "configuring", 00:38:12.773 "raid_level": "raid0", 00:38:12.773 "superblock": true, 00:38:12.773 "num_base_bdevs": 4, 00:38:12.773 "num_base_bdevs_discovered": 0, 00:38:12.773 "num_base_bdevs_operational": 4, 00:38:12.773 "base_bdevs_list": [ 00:38:12.773 { 00:38:12.773 "name": "BaseBdev1", 00:38:12.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:12.773 "is_configured": false, 00:38:12.773 "data_offset": 0, 00:38:12.773 "data_size": 0 00:38:12.773 }, 00:38:12.773 { 00:38:12.773 "name": "BaseBdev2", 00:38:12.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:12.773 "is_configured": false, 00:38:12.773 "data_offset": 0, 00:38:12.773 "data_size": 0 00:38:12.773 }, 00:38:12.773 { 00:38:12.773 "name": "BaseBdev3", 00:38:12.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:12.773 "is_configured": false, 00:38:12.773 "data_offset": 0, 00:38:12.773 "data_size": 0 00:38:12.773 }, 00:38:12.773 { 00:38:12.773 "name": "BaseBdev4", 00:38:12.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:12.773 "is_configured": false, 00:38:12.773 "data_offset": 0, 00:38:12.773 "data_size": 0 00:38:12.773 } 00:38:12.773 ] 00:38:12.773 }' 00:38:12.773 05:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:12.773 05:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:13.341 05:29:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:38:13.341 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:13.341 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:13.341 [2024-12-09 05:29:00.012074] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:38:13.341 [2024-12-09 05:29:00.012137] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:38:13.341 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:13.341 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:38:13.341 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:13.341 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:13.341 [2024-12-09 05:29:00.020267] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:38:13.341 [2024-12-09 05:29:00.020496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:38:13.341 [2024-12-09 05:29:00.020614] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:13.341 [2024-12-09 05:29:00.020674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:13.342 [2024-12-09 05:29:00.020911] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:38:13.342 [2024-12-09 05:29:00.020956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:38:13.342 [2024-12-09 05:29:00.020968] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:38:13.342 [2024-12-09 05:29:00.020984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:13.342 [2024-12-09 05:29:00.066687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:13.342 BaseBdev1 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:13.342 [ 00:38:13.342 { 00:38:13.342 "name": "BaseBdev1", 00:38:13.342 "aliases": [ 00:38:13.342 "113e3711-978b-4689-9be4-60fa2e29dd03" 00:38:13.342 ], 00:38:13.342 "product_name": "Malloc disk", 00:38:13.342 "block_size": 512, 00:38:13.342 "num_blocks": 65536, 00:38:13.342 "uuid": "113e3711-978b-4689-9be4-60fa2e29dd03", 00:38:13.342 "assigned_rate_limits": { 00:38:13.342 "rw_ios_per_sec": 0, 00:38:13.342 "rw_mbytes_per_sec": 0, 00:38:13.342 "r_mbytes_per_sec": 0, 00:38:13.342 "w_mbytes_per_sec": 0 00:38:13.342 }, 00:38:13.342 "claimed": true, 00:38:13.342 "claim_type": "exclusive_write", 00:38:13.342 "zoned": false, 00:38:13.342 "supported_io_types": { 00:38:13.342 "read": true, 00:38:13.342 "write": true, 00:38:13.342 "unmap": true, 00:38:13.342 "flush": true, 00:38:13.342 "reset": true, 00:38:13.342 "nvme_admin": false, 00:38:13.342 "nvme_io": false, 00:38:13.342 "nvme_io_md": false, 00:38:13.342 "write_zeroes": true, 00:38:13.342 "zcopy": true, 00:38:13.342 "get_zone_info": false, 00:38:13.342 "zone_management": false, 00:38:13.342 "zone_append": false, 00:38:13.342 "compare": false, 00:38:13.342 "compare_and_write": false, 00:38:13.342 "abort": true, 00:38:13.342 "seek_hole": false, 00:38:13.342 "seek_data": false, 00:38:13.342 "copy": true, 00:38:13.342 "nvme_iov_md": false 00:38:13.342 }, 00:38:13.342 "memory_domains": [ 00:38:13.342 { 00:38:13.342 "dma_device_id": "system", 00:38:13.342 "dma_device_type": 1 00:38:13.342 }, 00:38:13.342 { 00:38:13.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:13.342 "dma_device_type": 2 00:38:13.342 } 
00:38:13.342 ], 00:38:13.342 "driver_specific": {} 00:38:13.342 } 00:38:13.342 ] 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:13.342 05:29:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:13.342 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:13.342 "name": "Existed_Raid", 00:38:13.342 "uuid": "d47c7b60-df49-45e5-bcd3-adc8bd06fab0", 00:38:13.342 "strip_size_kb": 64, 00:38:13.342 "state": "configuring", 00:38:13.342 "raid_level": "raid0", 00:38:13.342 "superblock": true, 00:38:13.342 "num_base_bdevs": 4, 00:38:13.342 "num_base_bdevs_discovered": 1, 00:38:13.342 "num_base_bdevs_operational": 4, 00:38:13.342 "base_bdevs_list": [ 00:38:13.342 { 00:38:13.342 "name": "BaseBdev1", 00:38:13.342 "uuid": "113e3711-978b-4689-9be4-60fa2e29dd03", 00:38:13.342 "is_configured": true, 00:38:13.342 "data_offset": 2048, 00:38:13.342 "data_size": 63488 00:38:13.342 }, 00:38:13.342 { 00:38:13.342 "name": "BaseBdev2", 00:38:13.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:13.342 "is_configured": false, 00:38:13.342 "data_offset": 0, 00:38:13.342 "data_size": 0 00:38:13.342 }, 00:38:13.342 { 00:38:13.342 "name": "BaseBdev3", 00:38:13.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:13.342 "is_configured": false, 00:38:13.343 "data_offset": 0, 00:38:13.343 "data_size": 0 00:38:13.343 }, 00:38:13.343 { 00:38:13.343 "name": "BaseBdev4", 00:38:13.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:13.343 "is_configured": false, 00:38:13.343 "data_offset": 0, 00:38:13.343 "data_size": 0 00:38:13.343 } 00:38:13.343 ] 00:38:13.343 }' 00:38:13.343 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:13.343 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:13.910 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:38:13.910 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:13.910 05:29:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:13.910 [2024-12-09 05:29:00.642925] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:38:13.910 [2024-12-09 05:29:00.642989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:38:13.910 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:13.910 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:38:13.910 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:13.910 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:13.910 [2024-12-09 05:29:00.650975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:13.910 [2024-12-09 05:29:00.653490] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:13.910 [2024-12-09 05:29:00.653692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:13.910 [2024-12-09 05:29:00.653883] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:38:13.910 [2024-12-09 05:29:00.653920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:38:13.910 [2024-12-09 05:29:00.653933] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:38:13.910 [2024-12-09 05:29:00.653947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:38:13.910 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:13.910 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:38:13.910 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:38:13.910 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:38:13.910 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:13.910 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:13.910 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:13.910 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:13.910 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:13.910 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:13.910 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:13.910 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:13.910 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:13.910 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:13.910 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:13.910 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:13.910 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:13.910 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:13.910 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:38:13.910 "name": "Existed_Raid", 00:38:13.910 "uuid": "3f16b10a-4bd8-4a5d-ba6a-845496bee28d", 00:38:13.910 "strip_size_kb": 64, 00:38:13.910 "state": "configuring", 00:38:13.910 "raid_level": "raid0", 00:38:13.910 "superblock": true, 00:38:13.910 "num_base_bdevs": 4, 00:38:13.910 "num_base_bdevs_discovered": 1, 00:38:13.910 "num_base_bdevs_operational": 4, 00:38:13.910 "base_bdevs_list": [ 00:38:13.910 { 00:38:13.910 "name": "BaseBdev1", 00:38:13.910 "uuid": "113e3711-978b-4689-9be4-60fa2e29dd03", 00:38:13.910 "is_configured": true, 00:38:13.910 "data_offset": 2048, 00:38:13.910 "data_size": 63488 00:38:13.910 }, 00:38:13.910 { 00:38:13.910 "name": "BaseBdev2", 00:38:13.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:13.910 "is_configured": false, 00:38:13.910 "data_offset": 0, 00:38:13.910 "data_size": 0 00:38:13.910 }, 00:38:13.910 { 00:38:13.910 "name": "BaseBdev3", 00:38:13.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:13.910 "is_configured": false, 00:38:13.910 "data_offset": 0, 00:38:13.910 "data_size": 0 00:38:13.910 }, 00:38:13.910 { 00:38:13.910 "name": "BaseBdev4", 00:38:13.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:13.910 "is_configured": false, 00:38:13.910 "data_offset": 0, 00:38:13.910 "data_size": 0 00:38:13.910 } 00:38:13.910 ] 00:38:13.910 }' 00:38:13.910 05:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:13.910 05:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:14.478 [2024-12-09 05:29:01.217327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:38:14.478 BaseBdev2 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:14.478 [ 00:38:14.478 { 00:38:14.478 "name": "BaseBdev2", 00:38:14.478 "aliases": [ 00:38:14.478 "20629702-fb20-43fd-9942-6233d1d84d59" 00:38:14.478 ], 00:38:14.478 "product_name": "Malloc disk", 00:38:14.478 "block_size": 512, 00:38:14.478 "num_blocks": 65536, 00:38:14.478 "uuid": "20629702-fb20-43fd-9942-6233d1d84d59", 
00:38:14.478 "assigned_rate_limits": { 00:38:14.478 "rw_ios_per_sec": 0, 00:38:14.478 "rw_mbytes_per_sec": 0, 00:38:14.478 "r_mbytes_per_sec": 0, 00:38:14.478 "w_mbytes_per_sec": 0 00:38:14.478 }, 00:38:14.478 "claimed": true, 00:38:14.478 "claim_type": "exclusive_write", 00:38:14.478 "zoned": false, 00:38:14.478 "supported_io_types": { 00:38:14.478 "read": true, 00:38:14.478 "write": true, 00:38:14.478 "unmap": true, 00:38:14.478 "flush": true, 00:38:14.478 "reset": true, 00:38:14.478 "nvme_admin": false, 00:38:14.478 "nvme_io": false, 00:38:14.478 "nvme_io_md": false, 00:38:14.478 "write_zeroes": true, 00:38:14.478 "zcopy": true, 00:38:14.478 "get_zone_info": false, 00:38:14.478 "zone_management": false, 00:38:14.478 "zone_append": false, 00:38:14.478 "compare": false, 00:38:14.478 "compare_and_write": false, 00:38:14.478 "abort": true, 00:38:14.478 "seek_hole": false, 00:38:14.478 "seek_data": false, 00:38:14.478 "copy": true, 00:38:14.478 "nvme_iov_md": false 00:38:14.478 }, 00:38:14.478 "memory_domains": [ 00:38:14.478 { 00:38:14.478 "dma_device_id": "system", 00:38:14.478 "dma_device_type": 1 00:38:14.478 }, 00:38:14.478 { 00:38:14.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:14.478 "dma_device_type": 2 00:38:14.478 } 00:38:14.478 ], 00:38:14.478 "driver_specific": {} 00:38:14.478 } 00:38:14.478 ] 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:14.478 "name": "Existed_Raid", 00:38:14.478 "uuid": "3f16b10a-4bd8-4a5d-ba6a-845496bee28d", 00:38:14.478 "strip_size_kb": 64, 00:38:14.478 "state": "configuring", 00:38:14.478 "raid_level": "raid0", 00:38:14.478 "superblock": true, 00:38:14.478 "num_base_bdevs": 4, 00:38:14.478 "num_base_bdevs_discovered": 2, 00:38:14.478 
"num_base_bdevs_operational": 4, 00:38:14.478 "base_bdevs_list": [ 00:38:14.478 { 00:38:14.478 "name": "BaseBdev1", 00:38:14.478 "uuid": "113e3711-978b-4689-9be4-60fa2e29dd03", 00:38:14.478 "is_configured": true, 00:38:14.478 "data_offset": 2048, 00:38:14.478 "data_size": 63488 00:38:14.478 }, 00:38:14.478 { 00:38:14.478 "name": "BaseBdev2", 00:38:14.478 "uuid": "20629702-fb20-43fd-9942-6233d1d84d59", 00:38:14.478 "is_configured": true, 00:38:14.478 "data_offset": 2048, 00:38:14.478 "data_size": 63488 00:38:14.478 }, 00:38:14.478 { 00:38:14.478 "name": "BaseBdev3", 00:38:14.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:14.478 "is_configured": false, 00:38:14.478 "data_offset": 0, 00:38:14.478 "data_size": 0 00:38:14.478 }, 00:38:14.478 { 00:38:14.478 "name": "BaseBdev4", 00:38:14.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:14.478 "is_configured": false, 00:38:14.478 "data_offset": 0, 00:38:14.478 "data_size": 0 00:38:14.478 } 00:38:14.478 ] 00:38:14.478 }' 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:14.478 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:15.046 BaseBdev3 00:38:15.046 [2024-12-09 05:29:01.820197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:15.046 [ 00:38:15.046 { 00:38:15.046 "name": "BaseBdev3", 00:38:15.046 "aliases": [ 00:38:15.046 "4d981e1b-c081-45be-b4e6-014d114962b3" 00:38:15.046 ], 00:38:15.046 "product_name": "Malloc disk", 00:38:15.046 "block_size": 512, 00:38:15.046 "num_blocks": 65536, 00:38:15.046 "uuid": "4d981e1b-c081-45be-b4e6-014d114962b3", 00:38:15.046 "assigned_rate_limits": { 00:38:15.046 "rw_ios_per_sec": 0, 00:38:15.046 "rw_mbytes_per_sec": 0, 00:38:15.046 "r_mbytes_per_sec": 0, 00:38:15.046 "w_mbytes_per_sec": 0 00:38:15.046 }, 00:38:15.046 "claimed": true, 00:38:15.046 "claim_type": "exclusive_write", 00:38:15.046 "zoned": false, 00:38:15.046 "supported_io_types": { 
00:38:15.046 "read": true, 00:38:15.046 "write": true, 00:38:15.046 "unmap": true, 00:38:15.046 "flush": true, 00:38:15.046 "reset": true, 00:38:15.046 "nvme_admin": false, 00:38:15.046 "nvme_io": false, 00:38:15.046 "nvme_io_md": false, 00:38:15.046 "write_zeroes": true, 00:38:15.046 "zcopy": true, 00:38:15.046 "get_zone_info": false, 00:38:15.046 "zone_management": false, 00:38:15.046 "zone_append": false, 00:38:15.046 "compare": false, 00:38:15.046 "compare_and_write": false, 00:38:15.046 "abort": true, 00:38:15.046 "seek_hole": false, 00:38:15.046 "seek_data": false, 00:38:15.046 "copy": true, 00:38:15.046 "nvme_iov_md": false 00:38:15.046 }, 00:38:15.046 "memory_domains": [ 00:38:15.046 { 00:38:15.046 "dma_device_id": "system", 00:38:15.046 "dma_device_type": 1 00:38:15.046 }, 00:38:15.046 { 00:38:15.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:15.046 "dma_device_type": 2 00:38:15.046 } 00:38:15.046 ], 00:38:15.046 "driver_specific": {} 00:38:15.046 } 00:38:15.046 ] 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:15.046 "name": "Existed_Raid", 00:38:15.046 "uuid": "3f16b10a-4bd8-4a5d-ba6a-845496bee28d", 00:38:15.046 "strip_size_kb": 64, 00:38:15.046 "state": "configuring", 00:38:15.046 "raid_level": "raid0", 00:38:15.046 "superblock": true, 00:38:15.046 "num_base_bdevs": 4, 00:38:15.046 "num_base_bdevs_discovered": 3, 00:38:15.046 "num_base_bdevs_operational": 4, 00:38:15.046 "base_bdevs_list": [ 00:38:15.046 { 00:38:15.046 "name": "BaseBdev1", 00:38:15.046 "uuid": "113e3711-978b-4689-9be4-60fa2e29dd03", 00:38:15.046 "is_configured": true, 00:38:15.046 "data_offset": 2048, 00:38:15.046 "data_size": 63488 00:38:15.046 }, 00:38:15.046 { 00:38:15.046 "name": "BaseBdev2", 00:38:15.046 
"uuid": "20629702-fb20-43fd-9942-6233d1d84d59", 00:38:15.046 "is_configured": true, 00:38:15.046 "data_offset": 2048, 00:38:15.046 "data_size": 63488 00:38:15.046 }, 00:38:15.046 { 00:38:15.046 "name": "BaseBdev3", 00:38:15.046 "uuid": "4d981e1b-c081-45be-b4e6-014d114962b3", 00:38:15.046 "is_configured": true, 00:38:15.046 "data_offset": 2048, 00:38:15.046 "data_size": 63488 00:38:15.046 }, 00:38:15.046 { 00:38:15.046 "name": "BaseBdev4", 00:38:15.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:15.046 "is_configured": false, 00:38:15.046 "data_offset": 0, 00:38:15.046 "data_size": 0 00:38:15.046 } 00:38:15.046 ] 00:38:15.046 }' 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:15.046 05:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:15.613 05:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:38:15.613 05:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.613 05:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:15.613 [2024-12-09 05:29:02.414020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:38:15.613 BaseBdev4 00:38:15.613 [2024-12-09 05:29:02.414549] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:38:15.613 [2024-12-09 05:29:02.414574] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:38:15.613 [2024-12-09 05:29:02.414978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:38:15.613 [2024-12-09 05:29:02.415196] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:38:15.613 [2024-12-09 05:29:02.415221] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:38:15.613 05:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.613 [2024-12-09 05:29:02.415406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:15.613 05:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:38:15.613 05:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:38:15.613 05:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:15.613 05:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:38:15.613 05:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:15.613 05:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:15.613 05:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:38:15.613 05:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.613 05:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:15.613 05:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.613 05:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:38:15.613 05:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.613 05:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:15.613 [ 00:38:15.613 { 00:38:15.613 "name": "BaseBdev4", 00:38:15.613 "aliases": [ 00:38:15.613 "7174dd64-9b11-40d0-bbe6-ed4556bb7113" 00:38:15.613 ], 00:38:15.613 "product_name": "Malloc disk", 00:38:15.613 "block_size": 512, 00:38:15.613 
"num_blocks": 65536, 00:38:15.613 "uuid": "7174dd64-9b11-40d0-bbe6-ed4556bb7113", 00:38:15.613 "assigned_rate_limits": { 00:38:15.613 "rw_ios_per_sec": 0, 00:38:15.613 "rw_mbytes_per_sec": 0, 00:38:15.613 "r_mbytes_per_sec": 0, 00:38:15.613 "w_mbytes_per_sec": 0 00:38:15.613 }, 00:38:15.613 "claimed": true, 00:38:15.613 "claim_type": "exclusive_write", 00:38:15.613 "zoned": false, 00:38:15.613 "supported_io_types": { 00:38:15.613 "read": true, 00:38:15.613 "write": true, 00:38:15.613 "unmap": true, 00:38:15.613 "flush": true, 00:38:15.613 "reset": true, 00:38:15.614 "nvme_admin": false, 00:38:15.614 "nvme_io": false, 00:38:15.614 "nvme_io_md": false, 00:38:15.614 "write_zeroes": true, 00:38:15.614 "zcopy": true, 00:38:15.614 "get_zone_info": false, 00:38:15.614 "zone_management": false, 00:38:15.614 "zone_append": false, 00:38:15.614 "compare": false, 00:38:15.614 "compare_and_write": false, 00:38:15.614 "abort": true, 00:38:15.614 "seek_hole": false, 00:38:15.614 "seek_data": false, 00:38:15.614 "copy": true, 00:38:15.614 "nvme_iov_md": false 00:38:15.614 }, 00:38:15.614 "memory_domains": [ 00:38:15.614 { 00:38:15.614 "dma_device_id": "system", 00:38:15.614 "dma_device_type": 1 00:38:15.614 }, 00:38:15.614 { 00:38:15.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:15.614 "dma_device_type": 2 00:38:15.614 } 00:38:15.614 ], 00:38:15.614 "driver_specific": {} 00:38:15.614 } 00:38:15.614 ] 00:38:15.614 05:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.614 05:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:38:15.614 05:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:38:15.614 05:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:38:15.614 05:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:38:15.614 05:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:15.614 05:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:15.614 05:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:15.614 05:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:15.614 05:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:15.614 05:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:15.614 05:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:15.614 05:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:15.614 05:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:15.614 05:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:15.614 05:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.614 05:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:15.614 05:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:15.614 05:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.614 05:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:15.614 "name": "Existed_Raid", 00:38:15.614 "uuid": "3f16b10a-4bd8-4a5d-ba6a-845496bee28d", 00:38:15.614 "strip_size_kb": 64, 00:38:15.614 "state": "online", 00:38:15.614 "raid_level": "raid0", 00:38:15.614 "superblock": true, 00:38:15.614 "num_base_bdevs": 4, 
00:38:15.614 "num_base_bdevs_discovered": 4, 00:38:15.614 "num_base_bdevs_operational": 4, 00:38:15.614 "base_bdevs_list": [ 00:38:15.614 { 00:38:15.614 "name": "BaseBdev1", 00:38:15.614 "uuid": "113e3711-978b-4689-9be4-60fa2e29dd03", 00:38:15.614 "is_configured": true, 00:38:15.614 "data_offset": 2048, 00:38:15.614 "data_size": 63488 00:38:15.614 }, 00:38:15.614 { 00:38:15.614 "name": "BaseBdev2", 00:38:15.614 "uuid": "20629702-fb20-43fd-9942-6233d1d84d59", 00:38:15.614 "is_configured": true, 00:38:15.614 "data_offset": 2048, 00:38:15.614 "data_size": 63488 00:38:15.614 }, 00:38:15.614 { 00:38:15.614 "name": "BaseBdev3", 00:38:15.614 "uuid": "4d981e1b-c081-45be-b4e6-014d114962b3", 00:38:15.614 "is_configured": true, 00:38:15.614 "data_offset": 2048, 00:38:15.614 "data_size": 63488 00:38:15.614 }, 00:38:15.614 { 00:38:15.614 "name": "BaseBdev4", 00:38:15.614 "uuid": "7174dd64-9b11-40d0-bbe6-ed4556bb7113", 00:38:15.614 "is_configured": true, 00:38:15.614 "data_offset": 2048, 00:38:15.614 "data_size": 63488 00:38:15.614 } 00:38:15.614 ] 00:38:15.614 }' 00:38:15.614 05:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:15.614 05:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:16.181 05:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:38:16.181 05:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:38:16.181 05:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:38:16.181 05:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:38:16.181 05:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:38:16.181 05:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:38:16.181 
05:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:38:16.181 05:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:38:16.181 05:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.181 05:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:16.181 [2024-12-09 05:29:02.974651] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:16.181 05:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.181 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:16.181 "name": "Existed_Raid", 00:38:16.181 "aliases": [ 00:38:16.181 "3f16b10a-4bd8-4a5d-ba6a-845496bee28d" 00:38:16.181 ], 00:38:16.181 "product_name": "Raid Volume", 00:38:16.181 "block_size": 512, 00:38:16.181 "num_blocks": 253952, 00:38:16.181 "uuid": "3f16b10a-4bd8-4a5d-ba6a-845496bee28d", 00:38:16.181 "assigned_rate_limits": { 00:38:16.181 "rw_ios_per_sec": 0, 00:38:16.181 "rw_mbytes_per_sec": 0, 00:38:16.181 "r_mbytes_per_sec": 0, 00:38:16.181 "w_mbytes_per_sec": 0 00:38:16.181 }, 00:38:16.181 "claimed": false, 00:38:16.181 "zoned": false, 00:38:16.181 "supported_io_types": { 00:38:16.181 "read": true, 00:38:16.181 "write": true, 00:38:16.181 "unmap": true, 00:38:16.181 "flush": true, 00:38:16.181 "reset": true, 00:38:16.181 "nvme_admin": false, 00:38:16.181 "nvme_io": false, 00:38:16.181 "nvme_io_md": false, 00:38:16.181 "write_zeroes": true, 00:38:16.181 "zcopy": false, 00:38:16.181 "get_zone_info": false, 00:38:16.181 "zone_management": false, 00:38:16.181 "zone_append": false, 00:38:16.181 "compare": false, 00:38:16.181 "compare_and_write": false, 00:38:16.181 "abort": false, 00:38:16.181 "seek_hole": false, 00:38:16.181 "seek_data": false, 00:38:16.181 "copy": false, 00:38:16.181 
"nvme_iov_md": false 00:38:16.181 }, 00:38:16.181 "memory_domains": [ 00:38:16.181 { 00:38:16.181 "dma_device_id": "system", 00:38:16.181 "dma_device_type": 1 00:38:16.181 }, 00:38:16.181 { 00:38:16.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:16.181 "dma_device_type": 2 00:38:16.181 }, 00:38:16.181 { 00:38:16.181 "dma_device_id": "system", 00:38:16.181 "dma_device_type": 1 00:38:16.181 }, 00:38:16.181 { 00:38:16.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:16.181 "dma_device_type": 2 00:38:16.181 }, 00:38:16.181 { 00:38:16.181 "dma_device_id": "system", 00:38:16.181 "dma_device_type": 1 00:38:16.181 }, 00:38:16.181 { 00:38:16.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:16.181 "dma_device_type": 2 00:38:16.181 }, 00:38:16.181 { 00:38:16.181 "dma_device_id": "system", 00:38:16.181 "dma_device_type": 1 00:38:16.181 }, 00:38:16.181 { 00:38:16.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:16.181 "dma_device_type": 2 00:38:16.181 } 00:38:16.181 ], 00:38:16.181 "driver_specific": { 00:38:16.181 "raid": { 00:38:16.181 "uuid": "3f16b10a-4bd8-4a5d-ba6a-845496bee28d", 00:38:16.181 "strip_size_kb": 64, 00:38:16.181 "state": "online", 00:38:16.181 "raid_level": "raid0", 00:38:16.181 "superblock": true, 00:38:16.181 "num_base_bdevs": 4, 00:38:16.181 "num_base_bdevs_discovered": 4, 00:38:16.181 "num_base_bdevs_operational": 4, 00:38:16.181 "base_bdevs_list": [ 00:38:16.181 { 00:38:16.181 "name": "BaseBdev1", 00:38:16.181 "uuid": "113e3711-978b-4689-9be4-60fa2e29dd03", 00:38:16.181 "is_configured": true, 00:38:16.181 "data_offset": 2048, 00:38:16.181 "data_size": 63488 00:38:16.181 }, 00:38:16.181 { 00:38:16.181 "name": "BaseBdev2", 00:38:16.181 "uuid": "20629702-fb20-43fd-9942-6233d1d84d59", 00:38:16.181 "is_configured": true, 00:38:16.181 "data_offset": 2048, 00:38:16.181 "data_size": 63488 00:38:16.181 }, 00:38:16.181 { 00:38:16.181 "name": "BaseBdev3", 00:38:16.181 "uuid": "4d981e1b-c081-45be-b4e6-014d114962b3", 00:38:16.181 "is_configured": true, 
00:38:16.181 "data_offset": 2048, 00:38:16.181 "data_size": 63488 00:38:16.181 }, 00:38:16.181 { 00:38:16.181 "name": "BaseBdev4", 00:38:16.181 "uuid": "7174dd64-9b11-40d0-bbe6-ed4556bb7113", 00:38:16.181 "is_configured": true, 00:38:16.181 "data_offset": 2048, 00:38:16.181 "data_size": 63488 00:38:16.181 } 00:38:16.181 ] 00:38:16.181 } 00:38:16.181 } 00:38:16.181 }' 00:38:16.181 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:38:16.181 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:38:16.181 BaseBdev2 00:38:16.181 BaseBdev3 00:38:16.181 BaseBdev4' 00:38:16.181 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:16.181 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:38:16.181 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:16.181 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:38:16.181 05:29:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.181 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:16.181 05:29:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:16.181 05:29:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.439 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:16.439 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:16.439 05:29:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:16.439 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:38:16.439 05:29:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.439 05:29:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:16.439 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:16.439 05:29:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.439 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:16.439 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:16.439 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:16.439 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:38:16.439 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:16.439 05:29:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.439 05:29:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:16.439 05:29:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.439 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:16.439 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:16.439 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:38:16.439 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:38:16.439 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:16.439 05:29:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.439 05:29:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:16.439 05:29:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.439 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:16.439 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:16.439 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:38:16.439 05:29:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.439 05:29:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:16.439 [2024-12-09 05:29:03.342407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:16.439 [2024-12-09 05:29:03.342630] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:16.439 [2024-12-09 05:29:03.342834] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:16.697 05:29:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.697 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:38:16.697 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:38:16.697 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:38:16.697 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:38:16.697 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:38:16.698 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:38:16.698 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:16.698 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:38:16.698 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:16.698 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:16.698 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:38:16.698 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:16.698 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:16.698 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:16.698 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:16.698 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:16.698 05:29:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.698 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:16.698 05:29:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:16.698 05:29:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
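At this point the test deletes BaseBdev1 and calls `verify_raid_bdev_state Existed_Raid offline raid0 64 3`, which fetches `bdev_raid_get_bdevs all`, picks out the array with the jq filter `.[] | select(.name == "Existed_Raid")`, and asserts on the state fields shown in the dump that follows. A Python sketch of that selection-and-check logic, using the field names from the logged `bdev_raid_get_bdevs` output (the function itself is an illustrative assumption, not part of `bdev_raid.sh`):

```python
import json

def verify_raid_bdev_state(raid_bdevs, name, expected_state, raid_level,
                           strip_size_kb, num_operational):
    # Pick the named raid bdev, as the jq filter
    # '.[] | select(.name == "Existed_Raid")' does in the trace, then
    # compare the fields the shell helper asserts on.
    info = next(b for b in raid_bdevs if b["name"] == name)
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size_kb
            and info["num_base_bdevs_operational"] == num_operational)

# Sample shaped like the "offline" dump in the trace (BaseBdev1 removed,
# so only 3 of 4 base bdevs remain discovered/operational).
sample = json.loads('''[{"name": "Existed_Raid", "strip_size_kb": 64,
  "state": "offline", "raid_level": "raid0", "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3, "num_base_bdevs_operational": 3}]''')
print(verify_raid_bdev_state(sample, "Existed_Raid",
                             "offline", "raid0", 64, 3))  # → True
```

Note the expected state is `offline` rather than `degraded` here because `has_redundancy raid0` returned 1 in the trace: raid0 has no redundancy, so losing a base bdev takes the array offline.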
00:38:16.698 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:16.698 "name": "Existed_Raid", 00:38:16.698 "uuid": "3f16b10a-4bd8-4a5d-ba6a-845496bee28d", 00:38:16.698 "strip_size_kb": 64, 00:38:16.698 "state": "offline", 00:38:16.698 "raid_level": "raid0", 00:38:16.698 "superblock": true, 00:38:16.698 "num_base_bdevs": 4, 00:38:16.698 "num_base_bdevs_discovered": 3, 00:38:16.698 "num_base_bdevs_operational": 3, 00:38:16.698 "base_bdevs_list": [ 00:38:16.698 { 00:38:16.698 "name": null, 00:38:16.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:16.698 "is_configured": false, 00:38:16.698 "data_offset": 0, 00:38:16.698 "data_size": 63488 00:38:16.698 }, 00:38:16.698 { 00:38:16.698 "name": "BaseBdev2", 00:38:16.698 "uuid": "20629702-fb20-43fd-9942-6233d1d84d59", 00:38:16.698 "is_configured": true, 00:38:16.698 "data_offset": 2048, 00:38:16.698 "data_size": 63488 00:38:16.698 }, 00:38:16.698 { 00:38:16.698 "name": "BaseBdev3", 00:38:16.698 "uuid": "4d981e1b-c081-45be-b4e6-014d114962b3", 00:38:16.698 "is_configured": true, 00:38:16.698 "data_offset": 2048, 00:38:16.698 "data_size": 63488 00:38:16.698 }, 00:38:16.698 { 00:38:16.698 "name": "BaseBdev4", 00:38:16.698 "uuid": "7174dd64-9b11-40d0-bbe6-ed4556bb7113", 00:38:16.698 "is_configured": true, 00:38:16.698 "data_offset": 2048, 00:38:16.698 "data_size": 63488 00:38:16.698 } 00:38:16.698 ] 00:38:16.698 }' 00:38:16.698 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:16.698 05:29:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:17.264 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:38:17.264 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:38:17.264 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:17.264 
05:29:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.264 05:29:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:17.264 05:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:38:17.264 05:29:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.264 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:38:17.264 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:38:17.264 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:38:17.264 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.264 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:17.264 [2024-12-09 05:29:04.017140] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:38:17.264 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.264 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:38:17.264 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:38:17.264 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:17.264 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.264 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:17.264 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:38:17.264 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:38:17.264 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:38:17.264 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:38:17.264 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:38:17.264 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.264 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:17.264 [2024-12-09 05:29:04.157695] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:38:17.264 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.264 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:38:17.264 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:38:17.523 05:29:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:17.523 [2024-12-09 05:29:04.293845] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:38:17.523 [2024-12-09 05:29:04.294039] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:17.523 BaseBdev2 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.523 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:38:17.524 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.524 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:17.524 [ 00:38:17.524 { 00:38:17.524 "name": "BaseBdev2", 00:38:17.524 "aliases": [ 00:38:17.524 
"1d40b49b-2251-423d-a5ae-3ccd89248fa1" 00:38:17.524 ], 00:38:17.524 "product_name": "Malloc disk", 00:38:17.524 "block_size": 512, 00:38:17.524 "num_blocks": 65536, 00:38:17.524 "uuid": "1d40b49b-2251-423d-a5ae-3ccd89248fa1", 00:38:17.524 "assigned_rate_limits": { 00:38:17.524 "rw_ios_per_sec": 0, 00:38:17.524 "rw_mbytes_per_sec": 0, 00:38:17.524 "r_mbytes_per_sec": 0, 00:38:17.524 "w_mbytes_per_sec": 0 00:38:17.524 }, 00:38:17.524 "claimed": false, 00:38:17.524 "zoned": false, 00:38:17.524 "supported_io_types": { 00:38:17.524 "read": true, 00:38:17.524 "write": true, 00:38:17.524 "unmap": true, 00:38:17.524 "flush": true, 00:38:17.524 "reset": true, 00:38:17.524 "nvme_admin": false, 00:38:17.524 "nvme_io": false, 00:38:17.524 "nvme_io_md": false, 00:38:17.524 "write_zeroes": true, 00:38:17.524 "zcopy": true, 00:38:17.524 "get_zone_info": false, 00:38:17.524 "zone_management": false, 00:38:17.524 "zone_append": false, 00:38:17.524 "compare": false, 00:38:17.524 "compare_and_write": false, 00:38:17.524 "abort": true, 00:38:17.524 "seek_hole": false, 00:38:17.782 "seek_data": false, 00:38:17.782 "copy": true, 00:38:17.782 "nvme_iov_md": false 00:38:17.782 }, 00:38:17.782 "memory_domains": [ 00:38:17.782 { 00:38:17.782 "dma_device_id": "system", 00:38:17.782 "dma_device_type": 1 00:38:17.782 }, 00:38:17.782 { 00:38:17.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:17.783 "dma_device_type": 2 00:38:17.783 } 00:38:17.783 ], 00:38:17.783 "driver_specific": {} 00:38:17.783 } 00:38:17.783 ] 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:38:17.783 05:29:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:17.783 BaseBdev3 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:17.783 [ 00:38:17.783 { 
00:38:17.783 "name": "BaseBdev3", 00:38:17.783 "aliases": [ 00:38:17.783 "06327011-7d65-41b9-98ac-a02f4228b471" 00:38:17.783 ], 00:38:17.783 "product_name": "Malloc disk", 00:38:17.783 "block_size": 512, 00:38:17.783 "num_blocks": 65536, 00:38:17.783 "uuid": "06327011-7d65-41b9-98ac-a02f4228b471", 00:38:17.783 "assigned_rate_limits": { 00:38:17.783 "rw_ios_per_sec": 0, 00:38:17.783 "rw_mbytes_per_sec": 0, 00:38:17.783 "r_mbytes_per_sec": 0, 00:38:17.783 "w_mbytes_per_sec": 0 00:38:17.783 }, 00:38:17.783 "claimed": false, 00:38:17.783 "zoned": false, 00:38:17.783 "supported_io_types": { 00:38:17.783 "read": true, 00:38:17.783 "write": true, 00:38:17.783 "unmap": true, 00:38:17.783 "flush": true, 00:38:17.783 "reset": true, 00:38:17.783 "nvme_admin": false, 00:38:17.783 "nvme_io": false, 00:38:17.783 "nvme_io_md": false, 00:38:17.783 "write_zeroes": true, 00:38:17.783 "zcopy": true, 00:38:17.783 "get_zone_info": false, 00:38:17.783 "zone_management": false, 00:38:17.783 "zone_append": false, 00:38:17.783 "compare": false, 00:38:17.783 "compare_and_write": false, 00:38:17.783 "abort": true, 00:38:17.783 "seek_hole": false, 00:38:17.783 "seek_data": false, 00:38:17.783 "copy": true, 00:38:17.783 "nvme_iov_md": false 00:38:17.783 }, 00:38:17.783 "memory_domains": [ 00:38:17.783 { 00:38:17.783 "dma_device_id": "system", 00:38:17.783 "dma_device_type": 1 00:38:17.783 }, 00:38:17.783 { 00:38:17.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:17.783 "dma_device_type": 2 00:38:17.783 } 00:38:17.783 ], 00:38:17.783 "driver_specific": {} 00:38:17.783 } 00:38:17.783 ] 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:17.783 BaseBdev4 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:38:17.783 [ 00:38:17.783 { 00:38:17.783 "name": "BaseBdev4", 00:38:17.783 "aliases": [ 00:38:17.783 "47d3dc20-ef14-481c-b3bc-10cd5dcefab1" 00:38:17.783 ], 00:38:17.783 "product_name": "Malloc disk", 00:38:17.783 "block_size": 512, 00:38:17.783 "num_blocks": 65536, 00:38:17.783 "uuid": "47d3dc20-ef14-481c-b3bc-10cd5dcefab1", 00:38:17.783 "assigned_rate_limits": { 00:38:17.783 "rw_ios_per_sec": 0, 00:38:17.783 "rw_mbytes_per_sec": 0, 00:38:17.783 "r_mbytes_per_sec": 0, 00:38:17.783 "w_mbytes_per_sec": 0 00:38:17.783 }, 00:38:17.783 "claimed": false, 00:38:17.783 "zoned": false, 00:38:17.783 "supported_io_types": { 00:38:17.783 "read": true, 00:38:17.783 "write": true, 00:38:17.783 "unmap": true, 00:38:17.783 "flush": true, 00:38:17.783 "reset": true, 00:38:17.783 "nvme_admin": false, 00:38:17.783 "nvme_io": false, 00:38:17.783 "nvme_io_md": false, 00:38:17.783 "write_zeroes": true, 00:38:17.783 "zcopy": true, 00:38:17.783 "get_zone_info": false, 00:38:17.783 "zone_management": false, 00:38:17.783 "zone_append": false, 00:38:17.783 "compare": false, 00:38:17.783 "compare_and_write": false, 00:38:17.783 "abort": true, 00:38:17.783 "seek_hole": false, 00:38:17.783 "seek_data": false, 00:38:17.783 "copy": true, 00:38:17.783 "nvme_iov_md": false 00:38:17.783 }, 00:38:17.783 "memory_domains": [ 00:38:17.783 { 00:38:17.783 "dma_device_id": "system", 00:38:17.783 "dma_device_type": 1 00:38:17.783 }, 00:38:17.783 { 00:38:17.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:17.783 "dma_device_type": 2 00:38:17.783 } 00:38:17.783 ], 00:38:17.783 "driver_specific": {} 00:38:17.783 } 00:38:17.783 ] 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:38:17.783 05:29:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:17.783 [2024-12-09 05:29:04.648266] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:38:17.783 [2024-12-09 05:29:04.648572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:38:17.783 [2024-12-09 05:29:04.648634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:17.783 [2024-12-09 05:29:04.651054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:38:17.783 [2024-12-09 05:29:04.651120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:17.783 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:17.784 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:17.784 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:17.784 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.784 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:17.784 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.784 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:17.784 "name": "Existed_Raid", 00:38:17.784 "uuid": "7ae0ce1d-f49c-457f-9867-9fda13c66845", 00:38:17.784 "strip_size_kb": 64, 00:38:17.784 "state": "configuring", 00:38:17.784 "raid_level": "raid0", 00:38:17.784 "superblock": true, 00:38:17.784 "num_base_bdevs": 4, 00:38:17.784 "num_base_bdevs_discovered": 3, 00:38:17.784 "num_base_bdevs_operational": 4, 00:38:17.784 "base_bdevs_list": [ 00:38:17.784 { 00:38:17.784 "name": "BaseBdev1", 00:38:17.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:17.784 "is_configured": false, 00:38:17.784 "data_offset": 0, 00:38:17.784 "data_size": 0 00:38:17.784 }, 00:38:17.784 { 00:38:17.784 "name": "BaseBdev2", 00:38:17.784 "uuid": "1d40b49b-2251-423d-a5ae-3ccd89248fa1", 00:38:17.784 "is_configured": true, 00:38:17.784 "data_offset": 2048, 00:38:17.784 "data_size": 63488 
00:38:17.784 }, 00:38:17.784 { 00:38:17.784 "name": "BaseBdev3", 00:38:17.784 "uuid": "06327011-7d65-41b9-98ac-a02f4228b471", 00:38:17.784 "is_configured": true, 00:38:17.784 "data_offset": 2048, 00:38:17.784 "data_size": 63488 00:38:17.784 }, 00:38:17.784 { 00:38:17.784 "name": "BaseBdev4", 00:38:17.784 "uuid": "47d3dc20-ef14-481c-b3bc-10cd5dcefab1", 00:38:17.784 "is_configured": true, 00:38:17.784 "data_offset": 2048, 00:38:17.784 "data_size": 63488 00:38:17.784 } 00:38:17.784 ] 00:38:17.784 }' 00:38:17.784 05:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:17.784 05:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:18.354 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:38:18.354 05:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:18.354 05:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:18.354 [2024-12-09 05:29:05.180469] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:38:18.355 05:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:18.355 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:38:18.355 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:18.355 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:18.355 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:18.355 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:18.355 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:38:18.355 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:18.355 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:18.355 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:18.355 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:18.355 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:18.355 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:18.355 05:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:18.355 05:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:18.355 05:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:18.355 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:18.355 "name": "Existed_Raid", 00:38:18.355 "uuid": "7ae0ce1d-f49c-457f-9867-9fda13c66845", 00:38:18.355 "strip_size_kb": 64, 00:38:18.355 "state": "configuring", 00:38:18.355 "raid_level": "raid0", 00:38:18.355 "superblock": true, 00:38:18.355 "num_base_bdevs": 4, 00:38:18.355 "num_base_bdevs_discovered": 2, 00:38:18.355 "num_base_bdevs_operational": 4, 00:38:18.355 "base_bdevs_list": [ 00:38:18.355 { 00:38:18.355 "name": "BaseBdev1", 00:38:18.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:18.355 "is_configured": false, 00:38:18.355 "data_offset": 0, 00:38:18.355 "data_size": 0 00:38:18.355 }, 00:38:18.355 { 00:38:18.355 "name": null, 00:38:18.355 "uuid": "1d40b49b-2251-423d-a5ae-3ccd89248fa1", 00:38:18.355 "is_configured": false, 00:38:18.355 "data_offset": 0, 00:38:18.355 "data_size": 63488 
00:38:18.355 }, 00:38:18.355 { 00:38:18.355 "name": "BaseBdev3", 00:38:18.355 "uuid": "06327011-7d65-41b9-98ac-a02f4228b471", 00:38:18.355 "is_configured": true, 00:38:18.355 "data_offset": 2048, 00:38:18.355 "data_size": 63488 00:38:18.355 }, 00:38:18.355 { 00:38:18.355 "name": "BaseBdev4", 00:38:18.355 "uuid": "47d3dc20-ef14-481c-b3bc-10cd5dcefab1", 00:38:18.355 "is_configured": true, 00:38:18.355 "data_offset": 2048, 00:38:18.355 "data_size": 63488 00:38:18.355 } 00:38:18.355 ] 00:38:18.355 }' 00:38:18.355 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:18.355 05:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:18.921 [2024-12-09 05:29:05.808764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:18.921 BaseBdev1 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:18.921 [ 00:38:18.921 { 00:38:18.921 "name": "BaseBdev1", 00:38:18.921 "aliases": [ 00:38:18.921 "3b2f693e-512a-4999-9790-eeae5182ed69" 00:38:18.921 ], 00:38:18.921 "product_name": "Malloc disk", 00:38:18.921 "block_size": 512, 00:38:18.921 "num_blocks": 65536, 00:38:18.921 "uuid": "3b2f693e-512a-4999-9790-eeae5182ed69", 00:38:18.921 "assigned_rate_limits": { 00:38:18.921 "rw_ios_per_sec": 0, 00:38:18.921 "rw_mbytes_per_sec": 0, 
00:38:18.921 "r_mbytes_per_sec": 0, 00:38:18.921 "w_mbytes_per_sec": 0 00:38:18.921 }, 00:38:18.921 "claimed": true, 00:38:18.921 "claim_type": "exclusive_write", 00:38:18.921 "zoned": false, 00:38:18.921 "supported_io_types": { 00:38:18.921 "read": true, 00:38:18.921 "write": true, 00:38:18.921 "unmap": true, 00:38:18.921 "flush": true, 00:38:18.921 "reset": true, 00:38:18.921 "nvme_admin": false, 00:38:18.921 "nvme_io": false, 00:38:18.921 "nvme_io_md": false, 00:38:18.921 "write_zeroes": true, 00:38:18.921 "zcopy": true, 00:38:18.921 "get_zone_info": false, 00:38:18.921 "zone_management": false, 00:38:18.921 "zone_append": false, 00:38:18.921 "compare": false, 00:38:18.921 "compare_and_write": false, 00:38:18.921 "abort": true, 00:38:18.921 "seek_hole": false, 00:38:18.921 "seek_data": false, 00:38:18.921 "copy": true, 00:38:18.921 "nvme_iov_md": false 00:38:18.921 }, 00:38:18.921 "memory_domains": [ 00:38:18.921 { 00:38:18.921 "dma_device_id": "system", 00:38:18.921 "dma_device_type": 1 00:38:18.921 }, 00:38:18.921 { 00:38:18.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:18.921 "dma_device_type": 2 00:38:18.921 } 00:38:18.921 ], 00:38:18.921 "driver_specific": {} 00:38:18.921 } 00:38:18.921 ] 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:18.921 05:29:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:18.921 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:18.922 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:18.922 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:18.922 05:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:18.922 05:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:18.922 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:18.922 05:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.200 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:19.200 "name": "Existed_Raid", 00:38:19.200 "uuid": "7ae0ce1d-f49c-457f-9867-9fda13c66845", 00:38:19.200 "strip_size_kb": 64, 00:38:19.200 "state": "configuring", 00:38:19.200 "raid_level": "raid0", 00:38:19.200 "superblock": true, 00:38:19.200 "num_base_bdevs": 4, 00:38:19.200 "num_base_bdevs_discovered": 3, 00:38:19.200 "num_base_bdevs_operational": 4, 00:38:19.200 "base_bdevs_list": [ 00:38:19.200 { 00:38:19.200 "name": "BaseBdev1", 00:38:19.200 "uuid": "3b2f693e-512a-4999-9790-eeae5182ed69", 00:38:19.200 "is_configured": true, 00:38:19.200 "data_offset": 2048, 00:38:19.200 "data_size": 63488 00:38:19.200 }, 00:38:19.200 { 
00:38:19.200 "name": null, 00:38:19.200 "uuid": "1d40b49b-2251-423d-a5ae-3ccd89248fa1", 00:38:19.200 "is_configured": false, 00:38:19.200 "data_offset": 0, 00:38:19.200 "data_size": 63488 00:38:19.200 }, 00:38:19.200 { 00:38:19.200 "name": "BaseBdev3", 00:38:19.200 "uuid": "06327011-7d65-41b9-98ac-a02f4228b471", 00:38:19.200 "is_configured": true, 00:38:19.200 "data_offset": 2048, 00:38:19.200 "data_size": 63488 00:38:19.200 }, 00:38:19.200 { 00:38:19.200 "name": "BaseBdev4", 00:38:19.200 "uuid": "47d3dc20-ef14-481c-b3bc-10cd5dcefab1", 00:38:19.200 "is_configured": true, 00:38:19.200 "data_offset": 2048, 00:38:19.200 "data_size": 63488 00:38:19.200 } 00:38:19.200 ] 00:38:19.200 }' 00:38:19.200 05:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:19.200 05:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:19.457 05:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:19.457 05:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.458 05:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:38:19.458 05:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:19.458 05:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.458 05:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:38:19.458 05:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:38:19.458 05:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.458 05:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:19.458 [2024-12-09 05:29:06.429114] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:38:19.716 05:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.716 05:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:38:19.716 05:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:19.716 05:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:19.716 05:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:19.716 05:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:19.716 05:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:19.716 05:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:19.716 05:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:19.716 05:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:19.716 05:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:19.716 05:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:19.716 05:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.716 05:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:19.716 05:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:19.716 05:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.716 05:29:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:19.716 "name": "Existed_Raid", 00:38:19.716 "uuid": "7ae0ce1d-f49c-457f-9867-9fda13c66845", 00:38:19.716 "strip_size_kb": 64, 00:38:19.716 "state": "configuring", 00:38:19.716 "raid_level": "raid0", 00:38:19.716 "superblock": true, 00:38:19.716 "num_base_bdevs": 4, 00:38:19.716 "num_base_bdevs_discovered": 2, 00:38:19.716 "num_base_bdevs_operational": 4, 00:38:19.716 "base_bdevs_list": [ 00:38:19.716 { 00:38:19.716 "name": "BaseBdev1", 00:38:19.716 "uuid": "3b2f693e-512a-4999-9790-eeae5182ed69", 00:38:19.716 "is_configured": true, 00:38:19.716 "data_offset": 2048, 00:38:19.716 "data_size": 63488 00:38:19.716 }, 00:38:19.716 { 00:38:19.716 "name": null, 00:38:19.716 "uuid": "1d40b49b-2251-423d-a5ae-3ccd89248fa1", 00:38:19.716 "is_configured": false, 00:38:19.716 "data_offset": 0, 00:38:19.716 "data_size": 63488 00:38:19.716 }, 00:38:19.716 { 00:38:19.716 "name": null, 00:38:19.716 "uuid": "06327011-7d65-41b9-98ac-a02f4228b471", 00:38:19.716 "is_configured": false, 00:38:19.716 "data_offset": 0, 00:38:19.716 "data_size": 63488 00:38:19.716 }, 00:38:19.716 { 00:38:19.716 "name": "BaseBdev4", 00:38:19.716 "uuid": "47d3dc20-ef14-481c-b3bc-10cd5dcefab1", 00:38:19.716 "is_configured": true, 00:38:19.716 "data_offset": 2048, 00:38:19.716 "data_size": 63488 00:38:19.716 } 00:38:19.716 ] 00:38:19.716 }' 00:38:19.716 05:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:19.716 05:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:20.282 05:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:38:20.282 05:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:20.282 05:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.282 
05:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:20.282 05:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.282 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:38:20.282 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:38:20.282 05:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.282 05:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:20.282 [2024-12-09 05:29:07.009258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:38:20.282 05:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.282 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:38:20.282 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:20.282 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:20.282 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:20.282 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:20.282 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:20.282 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:20.282 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:20.282 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:38:20.282 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:20.282 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:20.282 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:20.282 05:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.282 05:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:20.282 05:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.282 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:20.282 "name": "Existed_Raid", 00:38:20.282 "uuid": "7ae0ce1d-f49c-457f-9867-9fda13c66845", 00:38:20.282 "strip_size_kb": 64, 00:38:20.282 "state": "configuring", 00:38:20.282 "raid_level": "raid0", 00:38:20.282 "superblock": true, 00:38:20.282 "num_base_bdevs": 4, 00:38:20.282 "num_base_bdevs_discovered": 3, 00:38:20.282 "num_base_bdevs_operational": 4, 00:38:20.282 "base_bdevs_list": [ 00:38:20.282 { 00:38:20.282 "name": "BaseBdev1", 00:38:20.282 "uuid": "3b2f693e-512a-4999-9790-eeae5182ed69", 00:38:20.282 "is_configured": true, 00:38:20.282 "data_offset": 2048, 00:38:20.282 "data_size": 63488 00:38:20.282 }, 00:38:20.282 { 00:38:20.282 "name": null, 00:38:20.282 "uuid": "1d40b49b-2251-423d-a5ae-3ccd89248fa1", 00:38:20.282 "is_configured": false, 00:38:20.282 "data_offset": 0, 00:38:20.282 "data_size": 63488 00:38:20.282 }, 00:38:20.282 { 00:38:20.282 "name": "BaseBdev3", 00:38:20.282 "uuid": "06327011-7d65-41b9-98ac-a02f4228b471", 00:38:20.282 "is_configured": true, 00:38:20.282 "data_offset": 2048, 00:38:20.282 "data_size": 63488 00:38:20.282 }, 00:38:20.282 { 00:38:20.282 "name": "BaseBdev4", 00:38:20.282 "uuid": 
"47d3dc20-ef14-481c-b3bc-10cd5dcefab1", 00:38:20.282 "is_configured": true, 00:38:20.282 "data_offset": 2048, 00:38:20.282 "data_size": 63488 00:38:20.282 } 00:38:20.282 ] 00:38:20.282 }' 00:38:20.282 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:20.282 05:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:20.889 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:20.889 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:38:20.889 05:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.889 05:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:20.889 05:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.889 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:38:20.889 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:38:20.889 05:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.889 05:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:20.889 [2024-12-09 05:29:07.601643] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:20.889 05:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.889 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:38:20.889 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:20.889 05:29:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:20.889 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:20.889 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:20.889 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:20.889 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:20.889 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:20.889 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:20.889 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:20.889 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:20.889 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:20.889 05:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.889 05:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:20.889 05:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.889 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:20.889 "name": "Existed_Raid", 00:38:20.889 "uuid": "7ae0ce1d-f49c-457f-9867-9fda13c66845", 00:38:20.889 "strip_size_kb": 64, 00:38:20.889 "state": "configuring", 00:38:20.889 "raid_level": "raid0", 00:38:20.889 "superblock": true, 00:38:20.889 "num_base_bdevs": 4, 00:38:20.889 "num_base_bdevs_discovered": 2, 00:38:20.889 "num_base_bdevs_operational": 4, 00:38:20.889 "base_bdevs_list": [ 00:38:20.889 { 00:38:20.889 "name": null, 00:38:20.889 
"uuid": "3b2f693e-512a-4999-9790-eeae5182ed69", 00:38:20.889 "is_configured": false, 00:38:20.889 "data_offset": 0, 00:38:20.889 "data_size": 63488 00:38:20.889 }, 00:38:20.889 { 00:38:20.889 "name": null, 00:38:20.889 "uuid": "1d40b49b-2251-423d-a5ae-3ccd89248fa1", 00:38:20.889 "is_configured": false, 00:38:20.889 "data_offset": 0, 00:38:20.889 "data_size": 63488 00:38:20.889 }, 00:38:20.889 { 00:38:20.889 "name": "BaseBdev3", 00:38:20.889 "uuid": "06327011-7d65-41b9-98ac-a02f4228b471", 00:38:20.889 "is_configured": true, 00:38:20.889 "data_offset": 2048, 00:38:20.889 "data_size": 63488 00:38:20.889 }, 00:38:20.889 { 00:38:20.889 "name": "BaseBdev4", 00:38:20.889 "uuid": "47d3dc20-ef14-481c-b3bc-10cd5dcefab1", 00:38:20.889 "is_configured": true, 00:38:20.889 "data_offset": 2048, 00:38:20.889 "data_size": 63488 00:38:20.889 } 00:38:20.889 ] 00:38:20.889 }' 00:38:20.889 05:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:20.889 05:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:21.456 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:21.456 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:38:21.456 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:21.456 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:21.456 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:21.456 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:38:21.456 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:38:21.456 05:29:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:38:21.456 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:21.456 [2024-12-09 05:29:08.256692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:21.456 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:21.456 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:38:21.456 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:21.456 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:21.456 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:21.456 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:21.456 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:21.456 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:21.456 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:21.456 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:21.456 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:21.456 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:21.456 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:21.456 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:21.456 05:29:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:21.456 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:21.456 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:21.456 "name": "Existed_Raid", 00:38:21.456 "uuid": "7ae0ce1d-f49c-457f-9867-9fda13c66845", 00:38:21.456 "strip_size_kb": 64, 00:38:21.456 "state": "configuring", 00:38:21.456 "raid_level": "raid0", 00:38:21.456 "superblock": true, 00:38:21.456 "num_base_bdevs": 4, 00:38:21.456 "num_base_bdevs_discovered": 3, 00:38:21.456 "num_base_bdevs_operational": 4, 00:38:21.456 "base_bdevs_list": [ 00:38:21.456 { 00:38:21.456 "name": null, 00:38:21.456 "uuid": "3b2f693e-512a-4999-9790-eeae5182ed69", 00:38:21.456 "is_configured": false, 00:38:21.456 "data_offset": 0, 00:38:21.456 "data_size": 63488 00:38:21.456 }, 00:38:21.456 { 00:38:21.456 "name": "BaseBdev2", 00:38:21.456 "uuid": "1d40b49b-2251-423d-a5ae-3ccd89248fa1", 00:38:21.456 "is_configured": true, 00:38:21.456 "data_offset": 2048, 00:38:21.456 "data_size": 63488 00:38:21.456 }, 00:38:21.456 { 00:38:21.456 "name": "BaseBdev3", 00:38:21.456 "uuid": "06327011-7d65-41b9-98ac-a02f4228b471", 00:38:21.456 "is_configured": true, 00:38:21.456 "data_offset": 2048, 00:38:21.456 "data_size": 63488 00:38:21.456 }, 00:38:21.456 { 00:38:21.456 "name": "BaseBdev4", 00:38:21.456 "uuid": "47d3dc20-ef14-481c-b3bc-10cd5dcefab1", 00:38:21.456 "is_configured": true, 00:38:21.456 "data_offset": 2048, 00:38:21.456 "data_size": 63488 00:38:21.456 } 00:38:21.456 ] 00:38:21.456 }' 00:38:21.456 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:21.456 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:22.024 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:38:22.024 05:29:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:22.024 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:22.024 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:22.024 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:22.024 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:38:22.024 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:22.024 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:22.024 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:22.024 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:38:22.024 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:22.024 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3b2f693e-512a-4999-9790-eeae5182ed69 00:38:22.024 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:22.024 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:22.024 [2024-12-09 05:29:08.918865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:38:22.024 [2024-12-09 05:29:08.919176] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:38:22.024 [2024-12-09 05:29:08.919208] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:38:22.024 NewBaseBdev 00:38:22.024 [2024-12-09 05:29:08.919566] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:38:22.024 [2024-12-09 05:29:08.919754] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:38:22.024 [2024-12-09 05:29:08.919794] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:38:22.024 [2024-12-09 05:29:08.919948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:22.024 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:22.024 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:38:22.024 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:38:22.024 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:22.024 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:38:22.024 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:22.024 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:22.024 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:38:22.024 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:22.024 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:22.024 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:22.024 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:38:22.024 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:22.024 05:29:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:22.024 [ 00:38:22.024 { 00:38:22.024 "name": "NewBaseBdev", 00:38:22.024 "aliases": [ 00:38:22.024 "3b2f693e-512a-4999-9790-eeae5182ed69" 00:38:22.024 ], 00:38:22.024 "product_name": "Malloc disk", 00:38:22.024 "block_size": 512, 00:38:22.024 "num_blocks": 65536, 00:38:22.024 "uuid": "3b2f693e-512a-4999-9790-eeae5182ed69", 00:38:22.024 "assigned_rate_limits": { 00:38:22.024 "rw_ios_per_sec": 0, 00:38:22.024 "rw_mbytes_per_sec": 0, 00:38:22.024 "r_mbytes_per_sec": 0, 00:38:22.024 "w_mbytes_per_sec": 0 00:38:22.024 }, 00:38:22.024 "claimed": true, 00:38:22.024 "claim_type": "exclusive_write", 00:38:22.024 "zoned": false, 00:38:22.024 "supported_io_types": { 00:38:22.024 "read": true, 00:38:22.024 "write": true, 00:38:22.024 "unmap": true, 00:38:22.024 "flush": true, 00:38:22.024 "reset": true, 00:38:22.024 "nvme_admin": false, 00:38:22.024 "nvme_io": false, 00:38:22.024 "nvme_io_md": false, 00:38:22.024 "write_zeroes": true, 00:38:22.024 "zcopy": true, 00:38:22.024 "get_zone_info": false, 00:38:22.024 "zone_management": false, 00:38:22.024 "zone_append": false, 00:38:22.024 "compare": false, 00:38:22.024 "compare_and_write": false, 00:38:22.024 "abort": true, 00:38:22.024 "seek_hole": false, 00:38:22.025 "seek_data": false, 00:38:22.025 "copy": true, 00:38:22.025 "nvme_iov_md": false 00:38:22.025 }, 00:38:22.025 "memory_domains": [ 00:38:22.025 { 00:38:22.025 "dma_device_id": "system", 00:38:22.025 "dma_device_type": 1 00:38:22.025 }, 00:38:22.025 { 00:38:22.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:22.025 "dma_device_type": 2 00:38:22.025 } 00:38:22.025 ], 00:38:22.025 "driver_specific": {} 00:38:22.025 } 00:38:22.025 ] 00:38:22.025 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:22.025 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:38:22.025 05:29:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:38:22.025 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:22.025 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:22.025 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:22.025 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:22.025 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:22.025 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:22.025 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:22.025 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:22.025 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:22.025 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:22.025 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:22.025 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:22.025 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:22.025 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:22.282 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:22.282 "name": "Existed_Raid", 00:38:22.282 "uuid": "7ae0ce1d-f49c-457f-9867-9fda13c66845", 00:38:22.282 "strip_size_kb": 64, 00:38:22.282 
"state": "online", 00:38:22.282 "raid_level": "raid0", 00:38:22.282 "superblock": true, 00:38:22.282 "num_base_bdevs": 4, 00:38:22.282 "num_base_bdevs_discovered": 4, 00:38:22.282 "num_base_bdevs_operational": 4, 00:38:22.282 "base_bdevs_list": [ 00:38:22.282 { 00:38:22.282 "name": "NewBaseBdev", 00:38:22.282 "uuid": "3b2f693e-512a-4999-9790-eeae5182ed69", 00:38:22.282 "is_configured": true, 00:38:22.282 "data_offset": 2048, 00:38:22.282 "data_size": 63488 00:38:22.282 }, 00:38:22.282 { 00:38:22.282 "name": "BaseBdev2", 00:38:22.282 "uuid": "1d40b49b-2251-423d-a5ae-3ccd89248fa1", 00:38:22.282 "is_configured": true, 00:38:22.282 "data_offset": 2048, 00:38:22.282 "data_size": 63488 00:38:22.282 }, 00:38:22.282 { 00:38:22.282 "name": "BaseBdev3", 00:38:22.282 "uuid": "06327011-7d65-41b9-98ac-a02f4228b471", 00:38:22.282 "is_configured": true, 00:38:22.282 "data_offset": 2048, 00:38:22.282 "data_size": 63488 00:38:22.282 }, 00:38:22.282 { 00:38:22.282 "name": "BaseBdev4", 00:38:22.282 "uuid": "47d3dc20-ef14-481c-b3bc-10cd5dcefab1", 00:38:22.282 "is_configured": true, 00:38:22.282 "data_offset": 2048, 00:38:22.282 "data_size": 63488 00:38:22.282 } 00:38:22.282 ] 00:38:22.282 }' 00:38:22.282 05:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:22.282 05:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:22.539 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:38:22.539 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:38:22.539 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:38:22.539 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:38:22.539 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:38:22.539 
05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:38:22.539 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:38:22.539 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:38:22.539 05:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:22.539 05:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:22.539 [2024-12-09 05:29:09.483566] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:22.539 05:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:22.797 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:22.797 "name": "Existed_Raid", 00:38:22.797 "aliases": [ 00:38:22.797 "7ae0ce1d-f49c-457f-9867-9fda13c66845" 00:38:22.797 ], 00:38:22.797 "product_name": "Raid Volume", 00:38:22.797 "block_size": 512, 00:38:22.797 "num_blocks": 253952, 00:38:22.797 "uuid": "7ae0ce1d-f49c-457f-9867-9fda13c66845", 00:38:22.797 "assigned_rate_limits": { 00:38:22.797 "rw_ios_per_sec": 0, 00:38:22.797 "rw_mbytes_per_sec": 0, 00:38:22.797 "r_mbytes_per_sec": 0, 00:38:22.797 "w_mbytes_per_sec": 0 00:38:22.797 }, 00:38:22.797 "claimed": false, 00:38:22.797 "zoned": false, 00:38:22.797 "supported_io_types": { 00:38:22.797 "read": true, 00:38:22.797 "write": true, 00:38:22.797 "unmap": true, 00:38:22.797 "flush": true, 00:38:22.797 "reset": true, 00:38:22.797 "nvme_admin": false, 00:38:22.797 "nvme_io": false, 00:38:22.797 "nvme_io_md": false, 00:38:22.797 "write_zeroes": true, 00:38:22.797 "zcopy": false, 00:38:22.797 "get_zone_info": false, 00:38:22.797 "zone_management": false, 00:38:22.797 "zone_append": false, 00:38:22.797 "compare": false, 00:38:22.797 "compare_and_write": false, 00:38:22.797 "abort": 
false, 00:38:22.797 "seek_hole": false, 00:38:22.797 "seek_data": false, 00:38:22.797 "copy": false, 00:38:22.797 "nvme_iov_md": false 00:38:22.797 }, 00:38:22.797 "memory_domains": [ 00:38:22.797 { 00:38:22.797 "dma_device_id": "system", 00:38:22.797 "dma_device_type": 1 00:38:22.797 }, 00:38:22.797 { 00:38:22.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:22.797 "dma_device_type": 2 00:38:22.797 }, 00:38:22.797 { 00:38:22.797 "dma_device_id": "system", 00:38:22.797 "dma_device_type": 1 00:38:22.797 }, 00:38:22.797 { 00:38:22.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:22.797 "dma_device_type": 2 00:38:22.797 }, 00:38:22.797 { 00:38:22.797 "dma_device_id": "system", 00:38:22.797 "dma_device_type": 1 00:38:22.797 }, 00:38:22.797 { 00:38:22.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:22.797 "dma_device_type": 2 00:38:22.797 }, 00:38:22.797 { 00:38:22.797 "dma_device_id": "system", 00:38:22.797 "dma_device_type": 1 00:38:22.797 }, 00:38:22.797 { 00:38:22.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:22.797 "dma_device_type": 2 00:38:22.797 } 00:38:22.797 ], 00:38:22.797 "driver_specific": { 00:38:22.797 "raid": { 00:38:22.797 "uuid": "7ae0ce1d-f49c-457f-9867-9fda13c66845", 00:38:22.797 "strip_size_kb": 64, 00:38:22.797 "state": "online", 00:38:22.797 "raid_level": "raid0", 00:38:22.797 "superblock": true, 00:38:22.797 "num_base_bdevs": 4, 00:38:22.797 "num_base_bdevs_discovered": 4, 00:38:22.797 "num_base_bdevs_operational": 4, 00:38:22.797 "base_bdevs_list": [ 00:38:22.797 { 00:38:22.797 "name": "NewBaseBdev", 00:38:22.797 "uuid": "3b2f693e-512a-4999-9790-eeae5182ed69", 00:38:22.797 "is_configured": true, 00:38:22.797 "data_offset": 2048, 00:38:22.797 "data_size": 63488 00:38:22.797 }, 00:38:22.797 { 00:38:22.797 "name": "BaseBdev2", 00:38:22.797 "uuid": "1d40b49b-2251-423d-a5ae-3ccd89248fa1", 00:38:22.797 "is_configured": true, 00:38:22.797 "data_offset": 2048, 00:38:22.797 "data_size": 63488 00:38:22.797 }, 00:38:22.797 { 00:38:22.797 
"name": "BaseBdev3", 00:38:22.797 "uuid": "06327011-7d65-41b9-98ac-a02f4228b471", 00:38:22.797 "is_configured": true, 00:38:22.797 "data_offset": 2048, 00:38:22.797 "data_size": 63488 00:38:22.797 }, 00:38:22.797 { 00:38:22.797 "name": "BaseBdev4", 00:38:22.797 "uuid": "47d3dc20-ef14-481c-b3bc-10cd5dcefab1", 00:38:22.797 "is_configured": true, 00:38:22.797 "data_offset": 2048, 00:38:22.797 "data_size": 63488 00:38:22.797 } 00:38:22.797 ] 00:38:22.797 } 00:38:22.797 } 00:38:22.797 }' 00:38:22.797 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:38:22.797 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:38:22.797 BaseBdev2 00:38:22.797 BaseBdev3 00:38:22.797 BaseBdev4' 00:38:22.797 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:22.797 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:38:22.797 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:22.797 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:22.797 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:38:22.797 05:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:22.797 05:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:22.797 05:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:22.797 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:22.797 05:29:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:22.797 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:22.797 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:38:22.797 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:22.797 05:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:22.797 05:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:22.797 05:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:22.797 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:22.797 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:22.797 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:22.797 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:38:22.797 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:22.797 05:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:22.798 05:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:22.798 05:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.055 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:23.055 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:38:23.055 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:23.055 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:38:23.055 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:23.055 05:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.055 05:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:23.055 05:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.055 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:23.055 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:23.055 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:38:23.055 05:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.055 05:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:23.055 [2024-12-09 05:29:09.851258] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:38:23.055 [2024-12-09 05:29:09.851325] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:23.055 [2024-12-09 05:29:09.851415] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:23.055 [2024-12-09 05:29:09.851522] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:23.055 [2024-12-09 05:29:09.851569] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:38:23.055 05:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.055 05:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70206 00:38:23.055 05:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70206 ']' 00:38:23.055 05:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70206 00:38:23.055 05:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:38:23.055 05:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:23.055 05:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70206 00:38:23.055 05:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:23.055 05:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:23.055 killing process with pid 70206 00:38:23.055 05:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70206' 00:38:23.055 05:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70206 00:38:23.055 [2024-12-09 05:29:09.889940] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:23.055 05:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70206 00:38:23.313 [2024-12-09 05:29:10.233957] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:24.688 05:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:38:24.688 00:38:24.688 real 0m13.030s 00:38:24.688 user 0m21.493s 00:38:24.688 sys 0m1.917s 00:38:24.688 05:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:24.688 
************************************ 00:38:24.688 END TEST raid_state_function_test_sb 00:38:24.688 ************************************ 00:38:24.688 05:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:24.688 05:29:11 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:38:24.688 05:29:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:24.688 05:29:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:24.688 05:29:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:24.688 ************************************ 00:38:24.688 START TEST raid_superblock_test 00:38:24.688 ************************************ 00:38:24.689 05:29:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:38:24.689 05:29:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:38:24.689 05:29:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:38:24.689 05:29:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:38:24.689 05:29:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:38:24.689 05:29:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:38:24.689 05:29:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:38:24.689 05:29:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:38:24.689 05:29:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:38:24.689 05:29:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:38:24.689 05:29:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:38:24.689 05:29:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:38:24.689 05:29:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:38:24.689 05:29:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:38:24.689 05:29:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:38:24.689 05:29:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:38:24.689 05:29:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:38:24.689 05:29:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70890 00:38:24.689 05:29:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70890 00:38:24.689 05:29:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70890 ']' 00:38:24.689 05:29:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:38:24.689 05:29:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:24.689 05:29:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:24.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:24.689 05:29:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:24.689 05:29:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:24.689 05:29:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:24.689 [2024-12-09 05:29:11.591814] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:24.689 [2024-12-09 05:29:11.591983] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70890 ] 00:38:24.948 [2024-12-09 05:29:11.783090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:25.207 [2024-12-09 05:29:11.919176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:25.207 [2024-12-09 05:29:12.141714] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:25.207 [2024-12-09 05:29:12.141754] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:25.775 05:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:25.775 05:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:38:25.775 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:38:25.775 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:38:25.775 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:38:25.775 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:38:25.775 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:38:25.775 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:38:25.775 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:38:25.775 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:38:25.775 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:38:25.775 
05:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:25.775 05:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:25.775 malloc1 00:38:25.775 05:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:25.775 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:38:25.775 05:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:25.775 05:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:25.775 [2024-12-09 05:29:12.637504] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:38:25.775 [2024-12-09 05:29:12.637585] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:25.775 [2024-12-09 05:29:12.637616] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:38:25.775 [2024-12-09 05:29:12.637646] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:25.775 [2024-12-09 05:29:12.640676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:25.775 [2024-12-09 05:29:12.640729] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:38:25.775 pt1 00:38:25.775 05:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:25.775 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:38:25.775 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:38:25.775 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:38:25.775 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:38:25.776 05:29:12 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:38:25.776 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:38:25.776 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:38:25.776 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:38:25.776 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:38:25.776 05:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:25.776 05:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:25.776 malloc2 00:38:25.776 05:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:25.776 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:38:25.776 05:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:25.776 05:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:25.776 [2024-12-09 05:29:12.695440] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:38:25.776 [2024-12-09 05:29:12.695495] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:25.776 [2024-12-09 05:29:12.695529] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:38:25.776 [2024-12-09 05:29:12.695543] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:25.776 [2024-12-09 05:29:12.698377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:25.776 [2024-12-09 05:29:12.698413] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:38:25.776 
pt2 00:38:25.776 05:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:25.776 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:38:25.776 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:38:25.776 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:38:25.776 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:38:25.776 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:38:25.776 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:38:25.776 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:38:25.776 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:38:25.776 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:38:25.776 05:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:25.776 05:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:26.036 malloc3 00:38:26.036 05:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.036 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:38:26.036 05:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.036 05:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:26.036 [2024-12-09 05:29:12.762935] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:38:26.036 [2024-12-09 05:29:12.762989] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:26.036 [2024-12-09 05:29:12.763022] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:38:26.036 [2024-12-09 05:29:12.763037] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:26.036 [2024-12-09 05:29:12.766052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:26.036 [2024-12-09 05:29:12.766110] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:38:26.036 pt3 00:38:26.036 05:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.036 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:38:26.036 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:38:26.036 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:38:26.036 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:38:26.036 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:38:26.036 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:38:26.036 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:38:26.036 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:38:26.036 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:38:26.036 05:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.036 05:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:26.036 malloc4 00:38:26.036 05:29:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.036 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:38:26.036 05:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.036 05:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:26.036 [2024-12-09 05:29:12.818049] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:38:26.036 [2024-12-09 05:29:12.818126] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:26.036 [2024-12-09 05:29:12.818168] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:38:26.036 [2024-12-09 05:29:12.818207] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:26.036 [2024-12-09 05:29:12.821043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:26.036 [2024-12-09 05:29:12.821096] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:38:26.036 pt4 00:38:26.036 05:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.036 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:38:26.036 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:38:26.036 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:38:26.036 05:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.036 05:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:26.036 [2024-12-09 05:29:12.826112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:38:26.036 [2024-12-09 
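The four `bdev_malloc_create 32 512 -b mallocN` / `bdev_passthru_create -b mallocN -p ptN -u …` pairs above come from the loop at `bdev_raid.sh@416`–`426`, which builds one passthru base bdev per `num_base_bdevs` with a fixed zero-padded UUID. A minimal sketch of that loop is below; the `rpc()` helper is a hypothetical stand-in that records command lines instead of invoking SPDK's `rpc_cmd` against `/var/tmp/spdk.sock`.

```python
# Sketch of the base-bdev setup loop from bdev_raid.sh@416-426 (assumption:
# rpc() is a placeholder recorder, not SPDK's real rpc_cmd wrapper).
num_base_bdevs = 4
issued = []

def rpc(*args):
    # A real harness would call scripts/rpc.py against the UNIX socket.
    issued.append(" ".join(args))

for i in range(1, num_base_bdevs + 1):
    bdev_malloc = f"malloc{i}"
    bdev_pt = f"pt{i}"
    # UUIDs in the log are zero-padded to a 12-digit final group.
    bdev_pt_uuid = f"00000000-0000-0000-0000-{i:012d}"
    # 32 MB malloc bdev with 512-byte blocks, then a passthru claiming it.
    rpc("bdev_malloc_create", "32", "512", "-b", bdev_malloc)
    rpc("bdev_passthru_create", "-b", bdev_malloc, "-p", bdev_pt, "-u", bdev_pt_uuid)

print(issued[0])  # bdev_malloc_create 32 512 -b malloc1
```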
05:29:12.828654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:26.036 [2024-12-09 05:29:12.828815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:38:26.036 [2024-12-09 05:29:12.828886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:38:26.036 [2024-12-09 05:29:12.829190] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:38:26.036 [2024-12-09 05:29:12.829217] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:38:26.036 [2024-12-09 05:29:12.829528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:38:26.036 [2024-12-09 05:29:12.829806] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:38:26.036 [2024-12-09 05:29:12.829871] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:38:26.036 [2024-12-09 05:29:12.830128] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:26.036 05:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.036 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:38:26.036 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:26.037 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:26.037 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:26.037 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:26.037 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:26.037 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:38:26.037 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:26.037 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:26.037 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:26.037 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:26.037 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:26.037 05:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.037 05:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:26.037 05:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.037 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:26.037 "name": "raid_bdev1", 00:38:26.037 "uuid": "660bbdaf-596f-4954-9ab2-760d9dadcf64", 00:38:26.037 "strip_size_kb": 64, 00:38:26.037 "state": "online", 00:38:26.037 "raid_level": "raid0", 00:38:26.037 "superblock": true, 00:38:26.037 "num_base_bdevs": 4, 00:38:26.037 "num_base_bdevs_discovered": 4, 00:38:26.037 "num_base_bdevs_operational": 4, 00:38:26.037 "base_bdevs_list": [ 00:38:26.037 { 00:38:26.037 "name": "pt1", 00:38:26.037 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:26.037 "is_configured": true, 00:38:26.037 "data_offset": 2048, 00:38:26.037 "data_size": 63488 00:38:26.037 }, 00:38:26.037 { 00:38:26.037 "name": "pt2", 00:38:26.037 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:26.037 "is_configured": true, 00:38:26.037 "data_offset": 2048, 00:38:26.037 "data_size": 63488 00:38:26.037 }, 00:38:26.037 { 00:38:26.037 "name": "pt3", 00:38:26.037 "uuid": "00000000-0000-0000-0000-000000000003", 00:38:26.037 "is_configured": true, 00:38:26.037 "data_offset": 2048, 00:38:26.037 
"data_size": 63488 00:38:26.037 }, 00:38:26.037 { 00:38:26.037 "name": "pt4", 00:38:26.037 "uuid": "00000000-0000-0000-0000-000000000004", 00:38:26.037 "is_configured": true, 00:38:26.037 "data_offset": 2048, 00:38:26.037 "data_size": 63488 00:38:26.037 } 00:38:26.037 ] 00:38:26.037 }' 00:38:26.037 05:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:26.037 05:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:26.604 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:38:26.604 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:38:26.604 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:38:26.604 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:38:26.604 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:38:26.604 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:38:26.604 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:38:26.604 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:38:26.604 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.604 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:26.605 [2024-12-09 05:29:13.362759] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:26.605 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.605 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:26.605 "name": "raid_bdev1", 00:38:26.605 "aliases": [ 00:38:26.605 "660bbdaf-596f-4954-9ab2-760d9dadcf64" 
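The `verify_raid_bdev_state raid_bdev1 online raid0 64 4` call above fetches `bdev_raid_get_bdevs all`, jq-selects the entry named `raid_bdev1`, and compares its fields against the expected state. A sketch of those checks, using the `raid_bdev_info` JSON captured in the log (abbreviated to the fields the helper inspects):

```python
import json

# The fields below are copied from the raid_bdev_info dump in the log;
# in the real test they come from `rpc_cmd bdev_raid_get_bdevs all`
# piped through `jq 'select(.name == "raid_bdev1")'`.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "raid0",
  "superblock": true,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 4,
  "num_base_bdevs_operational": 4
}
""")

# verify_raid_bdev_state's comparisons, expressed as assertions.
assert raid_bdev_info["state"] == "online"
assert raid_bdev_info["raid_level"] == "raid0"
assert raid_bdev_info["strip_size_kb"] == 64
assert raid_bdev_info["num_base_bdevs_operational"] == 4
print("raid_bdev1 state verified")
```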
00:38:26.605 ], 00:38:26.605 "product_name": "Raid Volume", 00:38:26.605 "block_size": 512, 00:38:26.605 "num_blocks": 253952, 00:38:26.605 "uuid": "660bbdaf-596f-4954-9ab2-760d9dadcf64", 00:38:26.605 "assigned_rate_limits": { 00:38:26.605 "rw_ios_per_sec": 0, 00:38:26.605 "rw_mbytes_per_sec": 0, 00:38:26.605 "r_mbytes_per_sec": 0, 00:38:26.605 "w_mbytes_per_sec": 0 00:38:26.605 }, 00:38:26.605 "claimed": false, 00:38:26.605 "zoned": false, 00:38:26.605 "supported_io_types": { 00:38:26.605 "read": true, 00:38:26.605 "write": true, 00:38:26.605 "unmap": true, 00:38:26.605 "flush": true, 00:38:26.605 "reset": true, 00:38:26.605 "nvme_admin": false, 00:38:26.605 "nvme_io": false, 00:38:26.605 "nvme_io_md": false, 00:38:26.605 "write_zeroes": true, 00:38:26.605 "zcopy": false, 00:38:26.605 "get_zone_info": false, 00:38:26.605 "zone_management": false, 00:38:26.605 "zone_append": false, 00:38:26.605 "compare": false, 00:38:26.605 "compare_and_write": false, 00:38:26.605 "abort": false, 00:38:26.605 "seek_hole": false, 00:38:26.605 "seek_data": false, 00:38:26.605 "copy": false, 00:38:26.605 "nvme_iov_md": false 00:38:26.605 }, 00:38:26.605 "memory_domains": [ 00:38:26.605 { 00:38:26.605 "dma_device_id": "system", 00:38:26.605 "dma_device_type": 1 00:38:26.605 }, 00:38:26.605 { 00:38:26.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:26.605 "dma_device_type": 2 00:38:26.605 }, 00:38:26.605 { 00:38:26.605 "dma_device_id": "system", 00:38:26.605 "dma_device_type": 1 00:38:26.605 }, 00:38:26.605 { 00:38:26.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:26.605 "dma_device_type": 2 00:38:26.605 }, 00:38:26.605 { 00:38:26.605 "dma_device_id": "system", 00:38:26.605 "dma_device_type": 1 00:38:26.605 }, 00:38:26.605 { 00:38:26.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:26.605 "dma_device_type": 2 00:38:26.605 }, 00:38:26.605 { 00:38:26.605 "dma_device_id": "system", 00:38:26.605 "dma_device_type": 1 00:38:26.605 }, 00:38:26.605 { 00:38:26.605 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:38:26.605 "dma_device_type": 2 00:38:26.605 } 00:38:26.605 ], 00:38:26.605 "driver_specific": { 00:38:26.605 "raid": { 00:38:26.605 "uuid": "660bbdaf-596f-4954-9ab2-760d9dadcf64", 00:38:26.605 "strip_size_kb": 64, 00:38:26.605 "state": "online", 00:38:26.605 "raid_level": "raid0", 00:38:26.605 "superblock": true, 00:38:26.605 "num_base_bdevs": 4, 00:38:26.605 "num_base_bdevs_discovered": 4, 00:38:26.605 "num_base_bdevs_operational": 4, 00:38:26.605 "base_bdevs_list": [ 00:38:26.605 { 00:38:26.605 "name": "pt1", 00:38:26.605 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:26.605 "is_configured": true, 00:38:26.605 "data_offset": 2048, 00:38:26.605 "data_size": 63488 00:38:26.605 }, 00:38:26.605 { 00:38:26.605 "name": "pt2", 00:38:26.605 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:26.605 "is_configured": true, 00:38:26.605 "data_offset": 2048, 00:38:26.605 "data_size": 63488 00:38:26.605 }, 00:38:26.605 { 00:38:26.605 "name": "pt3", 00:38:26.605 "uuid": "00000000-0000-0000-0000-000000000003", 00:38:26.605 "is_configured": true, 00:38:26.605 "data_offset": 2048, 00:38:26.605 "data_size": 63488 00:38:26.605 }, 00:38:26.605 { 00:38:26.605 "name": "pt4", 00:38:26.605 "uuid": "00000000-0000-0000-0000-000000000004", 00:38:26.605 "is_configured": true, 00:38:26.605 "data_offset": 2048, 00:38:26.605 "data_size": 63488 00:38:26.605 } 00:38:26.605 ] 00:38:26.605 } 00:38:26.605 } 00:38:26.605 }' 00:38:26.605 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:38:26.605 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:38:26.605 pt2 00:38:26.605 pt3 00:38:26.605 pt4' 00:38:26.605 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:26.605 05:29:13 bdev_raid.raid_superblock_test -- 
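The jq filter at `bdev_raid.sh@188`, `.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name`, is what turns the big `bdev_get_bdevs` dump above into the `base_bdev_names='pt1 pt2 pt3 pt4'` list used by the per-bdev block-size comparison loop. The same extraction in Python, over a fragment of the JSON shown in the log:

```python
import json

# Abbreviated to the fields the jq filter touches; the full dump in the log
# also carries supported_io_types, memory_domains, etc.
bdev = json.loads("""
{
  "name": "raid_bdev1",
  "block_size": 512,
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "pt1", "is_configured": true},
        {"name": "pt2", "is_configured": true},
        {"name": "pt3", "is_configured": true},
        {"name": "pt4", "is_configured": true}
      ]
    }
  }
}
""")

# Equivalent of: .driver_specific.raid.base_bdevs_list[]
#                | select(.is_configured == true).name
base_bdev_names = [b["name"]
                   for b in bdev["driver_specific"]["raid"]["base_bdevs_list"]
                   if b["is_configured"]]
print(base_bdev_names)  # ['pt1', 'pt2', 'pt3', 'pt4']
```

The loop that follows then asserts each base bdev reports the same `512 `-style `[.block_size, .md_size, .md_interleave, .dif_type]` tuple as the raid bdev (trailing spaces because the md/dif fields are null and jq's `join(" ")` renders them empty).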
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:38:26.605 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:26.605 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:38:26.605 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.605 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:26.605 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:26.605 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.605 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:26.605 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:26.605 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:26.605 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:38:26.605 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:26.605 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.605 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:26.864 05:29:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:26.864 [2024-12-09 05:29:13.734872] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=660bbdaf-596f-4954-9ab2-760d9dadcf64 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 660bbdaf-596f-4954-9ab2-760d9dadcf64 ']' 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:26.864 [2024-12-09 05:29:13.786437] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:26.864 [2024-12-09 05:29:13.786465] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:26.864 [2024-12-09 05:29:13.786586] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:26.864 [2024-12-09 05:29:13.786692] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:26.864 [2024-12-09 05:29:13.786716] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:26.864 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.123 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:38:27.123 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:38:27.123 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:38:27.123 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:38:27.123 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.123 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:27.123 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.123 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:38:27.123 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:38:27.123 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.123 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:27.123 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.123 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:38:27.123 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:38:27.123 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.123 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:27.123 05:29:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.123 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:38:27.123 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:38:27.123 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.123 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:27.123 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.123 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:38:27.124 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:38:27.124 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.124 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:27.124 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.124 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:38:27.124 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:38:27.124 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:38:27.124 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:38:27.124 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:38:27.124 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:27.124 05:29:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:38:27.124 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:27.124 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:38:27.124 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.124 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:27.124 [2024-12-09 05:29:13.942549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:38:27.124 [2024-12-09 05:29:13.945208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:38:27.124 [2024-12-09 05:29:13.945278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:38:27.124 [2024-12-09 05:29:13.945328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:38:27.124 [2024-12-09 05:29:13.945396] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:38:27.124 [2024-12-09 05:29:13.945485] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:38:27.124 [2024-12-09 05:29:13.945515] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:38:27.124 [2024-12-09 05:29:13.945544] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:38:27.124 [2024-12-09 05:29:13.945564] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:27.124 [2024-12-09 05:29:13.945582] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:38:27.124 request: 00:38:27.124 { 00:38:27.124 "name": "raid_bdev1", 00:38:27.124 "raid_level": "raid0", 00:38:27.124 "base_bdevs": [ 00:38:27.124 "malloc1", 00:38:27.124 "malloc2", 00:38:27.124 "malloc3", 00:38:27.124 "malloc4" 00:38:27.124 ], 00:38:27.124 "strip_size_kb": 64, 00:38:27.124 "superblock": false, 00:38:27.124 "method": "bdev_raid_create", 00:38:27.124 "req_id": 1 00:38:27.124 } 00:38:27.124 Got JSON-RPC error response 00:38:27.124 response: 00:38:27.124 { 00:38:27.124 "code": -17, 00:38:27.124 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:38:27.124 } 00:38:27.124 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:38:27.124 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:38:27.124 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:27.124 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:27.124 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:27.124 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:38:27.124 05:29:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:27.124 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.124 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:27.124 05:29:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.124 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:38:27.124 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:38:27.124 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:38:27.124 05:29:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.124 05:29:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:27.124 [2024-12-09 05:29:14.006518] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:38:27.124 [2024-12-09 05:29:14.006602] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:27.124 [2024-12-09 05:29:14.006629] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:38:27.124 [2024-12-09 05:29:14.006660] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:27.124 [2024-12-09 05:29:14.009732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:27.124 [2024-12-09 05:29:14.009819] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:38:27.124 [2024-12-09 05:29:14.009902] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:38:27.124 [2024-12-09 05:29:14.009968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:38:27.124 pt1 00:38:27.124 05:29:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.124 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:38:27.124 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:27.124 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:27.124 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:27.124 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:27.124 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:38:27.124 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:27.124 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:27.124 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:27.124 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:27.124 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:27.124 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:27.124 05:29:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.124 05:29:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:27.124 05:29:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.124 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:27.124 "name": "raid_bdev1", 00:38:27.124 "uuid": "660bbdaf-596f-4954-9ab2-760d9dadcf64", 00:38:27.124 "strip_size_kb": 64, 00:38:27.124 "state": "configuring", 00:38:27.124 "raid_level": "raid0", 00:38:27.124 "superblock": true, 00:38:27.124 "num_base_bdevs": 4, 00:38:27.124 "num_base_bdevs_discovered": 1, 00:38:27.124 "num_base_bdevs_operational": 4, 00:38:27.124 "base_bdevs_list": [ 00:38:27.124 { 00:38:27.124 "name": "pt1", 00:38:27.124 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:27.124 "is_configured": true, 00:38:27.124 "data_offset": 2048, 00:38:27.124 "data_size": 63488 00:38:27.124 }, 00:38:27.124 { 00:38:27.124 "name": null, 00:38:27.124 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:27.124 "is_configured": false, 00:38:27.124 "data_offset": 2048, 00:38:27.124 "data_size": 63488 00:38:27.124 }, 00:38:27.124 { 00:38:27.124 "name": null, 00:38:27.124 
"uuid": "00000000-0000-0000-0000-000000000003", 00:38:27.125 "is_configured": false, 00:38:27.125 "data_offset": 2048, 00:38:27.125 "data_size": 63488 00:38:27.125 }, 00:38:27.125 { 00:38:27.125 "name": null, 00:38:27.125 "uuid": "00000000-0000-0000-0000-000000000004", 00:38:27.125 "is_configured": false, 00:38:27.125 "data_offset": 2048, 00:38:27.125 "data_size": 63488 00:38:27.125 } 00:38:27.125 ] 00:38:27.125 }' 00:38:27.125 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:27.125 05:29:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:27.691 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:38:27.691 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:38:27.691 05:29:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.691 05:29:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:27.691 [2024-12-09 05:29:14.534702] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:38:27.691 [2024-12-09 05:29:14.534815] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:27.691 [2024-12-09 05:29:14.534844] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:38:27.691 [2024-12-09 05:29:14.534862] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:27.691 [2024-12-09 05:29:14.535381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:27.691 [2024-12-09 05:29:14.535414] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:38:27.691 [2024-12-09 05:29:14.535548] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:38:27.691 [2024-12-09 05:29:14.535582] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:27.691 pt2 00:38:27.691 05:29:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.691 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:38:27.691 05:29:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.691 05:29:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:27.691 [2024-12-09 05:29:14.542712] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:38:27.691 05:29:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.691 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:38:27.691 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:27.691 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:27.691 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:27.691 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:27.691 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:27.691 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:27.691 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:27.691 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:27.691 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:27.691 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:27.691 05:29:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:27.691 05:29:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.691 05:29:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:27.691 05:29:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.691 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:27.691 "name": "raid_bdev1", 00:38:27.691 "uuid": "660bbdaf-596f-4954-9ab2-760d9dadcf64", 00:38:27.691 "strip_size_kb": 64, 00:38:27.691 "state": "configuring", 00:38:27.691 "raid_level": "raid0", 00:38:27.691 "superblock": true, 00:38:27.691 "num_base_bdevs": 4, 00:38:27.691 "num_base_bdevs_discovered": 1, 00:38:27.691 "num_base_bdevs_operational": 4, 00:38:27.691 "base_bdevs_list": [ 00:38:27.691 { 00:38:27.691 "name": "pt1", 00:38:27.691 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:27.691 "is_configured": true, 00:38:27.691 "data_offset": 2048, 00:38:27.691 "data_size": 63488 00:38:27.691 }, 00:38:27.691 { 00:38:27.691 "name": null, 00:38:27.691 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:27.691 "is_configured": false, 00:38:27.691 "data_offset": 0, 00:38:27.691 "data_size": 63488 00:38:27.691 }, 00:38:27.691 { 00:38:27.691 "name": null, 00:38:27.691 "uuid": "00000000-0000-0000-0000-000000000003", 00:38:27.691 "is_configured": false, 00:38:27.691 "data_offset": 2048, 00:38:27.691 "data_size": 63488 00:38:27.691 }, 00:38:27.691 { 00:38:27.691 "name": null, 00:38:27.691 "uuid": "00000000-0000-0000-0000-000000000004", 00:38:27.691 "is_configured": false, 00:38:27.691 "data_offset": 2048, 00:38:27.691 "data_size": 63488 00:38:27.691 } 00:38:27.691 ] 00:38:27.691 }' 00:38:27.691 05:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:27.691 05:29:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:38:28.260 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:38:28.260 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:38:28.260 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:38:28.260 05:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.260 05:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:28.260 [2024-12-09 05:29:15.066960] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:38:28.260 [2024-12-09 05:29:15.067046] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:28.260 [2024-12-09 05:29:15.067079] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:38:28.260 [2024-12-09 05:29:15.067095] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:28.260 [2024-12-09 05:29:15.067639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:28.260 [2024-12-09 05:29:15.067663] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:38:28.260 [2024-12-09 05:29:15.067756] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:38:28.260 [2024-12-09 05:29:15.067817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:28.260 pt2 00:38:28.260 05:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.260 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:38:28.260 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:38:28.260 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:38:28.260 05:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.260 05:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:28.260 [2024-12-09 05:29:15.074950] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:38:28.260 [2024-12-09 05:29:15.075004] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:28.261 [2024-12-09 05:29:15.075031] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:38:28.261 [2024-12-09 05:29:15.075045] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:28.261 [2024-12-09 05:29:15.075531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:28.261 [2024-12-09 05:29:15.075560] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:38:28.261 [2024-12-09 05:29:15.075636] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:38:28.261 [2024-12-09 05:29:15.075670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:38:28.261 pt3 00:38:28.261 05:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.261 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:38:28.261 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:38:28.261 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:38:28.261 05:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.261 05:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:28.261 [2024-12-09 05:29:15.082899] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:38:28.261 [2024-12-09 05:29:15.082961] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:28.261 [2024-12-09 05:29:15.082987] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:38:28.261 [2024-12-09 05:29:15.083002] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:28.261 [2024-12-09 05:29:15.083463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:28.261 [2024-12-09 05:29:15.083493] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:38:28.261 [2024-12-09 05:29:15.083570] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:38:28.261 [2024-12-09 05:29:15.083600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:38:28.261 [2024-12-09 05:29:15.083801] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:38:28.261 [2024-12-09 05:29:15.083818] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:38:28.261 [2024-12-09 05:29:15.084132] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:38:28.261 [2024-12-09 05:29:15.084387] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:38:28.261 [2024-12-09 05:29:15.084418] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:38:28.261 [2024-12-09 05:29:15.084575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:28.261 pt4 00:38:28.261 05:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.261 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:38:28.261 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:38:28.261 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:38:28.261 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:28.261 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:28.261 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:28.261 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:28.261 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:28.261 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:28.261 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:28.261 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:28.261 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:28.261 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:28.261 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:28.261 05:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.261 05:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:28.261 05:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.261 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:28.261 "name": "raid_bdev1", 00:38:28.261 "uuid": "660bbdaf-596f-4954-9ab2-760d9dadcf64", 00:38:28.261 "strip_size_kb": 64, 00:38:28.261 "state": "online", 00:38:28.261 "raid_level": "raid0", 00:38:28.261 
"superblock": true, 00:38:28.261 "num_base_bdevs": 4, 00:38:28.261 "num_base_bdevs_discovered": 4, 00:38:28.261 "num_base_bdevs_operational": 4, 00:38:28.261 "base_bdevs_list": [ 00:38:28.261 { 00:38:28.261 "name": "pt1", 00:38:28.261 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:28.261 "is_configured": true, 00:38:28.261 "data_offset": 2048, 00:38:28.261 "data_size": 63488 00:38:28.261 }, 00:38:28.261 { 00:38:28.261 "name": "pt2", 00:38:28.261 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:28.261 "is_configured": true, 00:38:28.261 "data_offset": 2048, 00:38:28.261 "data_size": 63488 00:38:28.261 }, 00:38:28.261 { 00:38:28.261 "name": "pt3", 00:38:28.261 "uuid": "00000000-0000-0000-0000-000000000003", 00:38:28.261 "is_configured": true, 00:38:28.261 "data_offset": 2048, 00:38:28.261 "data_size": 63488 00:38:28.261 }, 00:38:28.261 { 00:38:28.261 "name": "pt4", 00:38:28.261 "uuid": "00000000-0000-0000-0000-000000000004", 00:38:28.261 "is_configured": true, 00:38:28.261 "data_offset": 2048, 00:38:28.261 "data_size": 63488 00:38:28.261 } 00:38:28.261 ] 00:38:28.261 }' 00:38:28.261 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:28.261 05:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:28.829 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:38:28.829 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:38:28.829 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:38:28.829 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:38:28.829 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:38:28.829 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:38:28.829 05:29:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:38:28.829 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:38:28.829 05:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.829 05:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:28.829 [2024-12-09 05:29:15.607594] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:28.829 05:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.829 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:28.829 "name": "raid_bdev1", 00:38:28.829 "aliases": [ 00:38:28.829 "660bbdaf-596f-4954-9ab2-760d9dadcf64" 00:38:28.829 ], 00:38:28.829 "product_name": "Raid Volume", 00:38:28.829 "block_size": 512, 00:38:28.829 "num_blocks": 253952, 00:38:28.829 "uuid": "660bbdaf-596f-4954-9ab2-760d9dadcf64", 00:38:28.829 "assigned_rate_limits": { 00:38:28.829 "rw_ios_per_sec": 0, 00:38:28.829 "rw_mbytes_per_sec": 0, 00:38:28.829 "r_mbytes_per_sec": 0, 00:38:28.829 "w_mbytes_per_sec": 0 00:38:28.829 }, 00:38:28.829 "claimed": false, 00:38:28.829 "zoned": false, 00:38:28.829 "supported_io_types": { 00:38:28.829 "read": true, 00:38:28.829 "write": true, 00:38:28.829 "unmap": true, 00:38:28.829 "flush": true, 00:38:28.829 "reset": true, 00:38:28.829 "nvme_admin": false, 00:38:28.829 "nvme_io": false, 00:38:28.829 "nvme_io_md": false, 00:38:28.829 "write_zeroes": true, 00:38:28.829 "zcopy": false, 00:38:28.829 "get_zone_info": false, 00:38:28.829 "zone_management": false, 00:38:28.829 "zone_append": false, 00:38:28.829 "compare": false, 00:38:28.829 "compare_and_write": false, 00:38:28.829 "abort": false, 00:38:28.829 "seek_hole": false, 00:38:28.829 "seek_data": false, 00:38:28.829 "copy": false, 00:38:28.829 "nvme_iov_md": false 00:38:28.829 }, 00:38:28.829 
"memory_domains": [ 00:38:28.829 { 00:38:28.829 "dma_device_id": "system", 00:38:28.829 "dma_device_type": 1 00:38:28.829 }, 00:38:28.829 { 00:38:28.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:28.829 "dma_device_type": 2 00:38:28.829 }, 00:38:28.829 { 00:38:28.829 "dma_device_id": "system", 00:38:28.829 "dma_device_type": 1 00:38:28.829 }, 00:38:28.829 { 00:38:28.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:28.829 "dma_device_type": 2 00:38:28.829 }, 00:38:28.829 { 00:38:28.829 "dma_device_id": "system", 00:38:28.829 "dma_device_type": 1 00:38:28.829 }, 00:38:28.829 { 00:38:28.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:28.829 "dma_device_type": 2 00:38:28.829 }, 00:38:28.829 { 00:38:28.829 "dma_device_id": "system", 00:38:28.830 "dma_device_type": 1 00:38:28.830 }, 00:38:28.830 { 00:38:28.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:28.830 "dma_device_type": 2 00:38:28.830 } 00:38:28.830 ], 00:38:28.830 "driver_specific": { 00:38:28.830 "raid": { 00:38:28.830 "uuid": "660bbdaf-596f-4954-9ab2-760d9dadcf64", 00:38:28.830 "strip_size_kb": 64, 00:38:28.830 "state": "online", 00:38:28.830 "raid_level": "raid0", 00:38:28.830 "superblock": true, 00:38:28.830 "num_base_bdevs": 4, 00:38:28.830 "num_base_bdevs_discovered": 4, 00:38:28.830 "num_base_bdevs_operational": 4, 00:38:28.830 "base_bdevs_list": [ 00:38:28.830 { 00:38:28.830 "name": "pt1", 00:38:28.830 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:28.830 "is_configured": true, 00:38:28.830 "data_offset": 2048, 00:38:28.830 "data_size": 63488 00:38:28.830 }, 00:38:28.830 { 00:38:28.830 "name": "pt2", 00:38:28.830 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:28.830 "is_configured": true, 00:38:28.830 "data_offset": 2048, 00:38:28.830 "data_size": 63488 00:38:28.830 }, 00:38:28.830 { 00:38:28.830 "name": "pt3", 00:38:28.830 "uuid": "00000000-0000-0000-0000-000000000003", 00:38:28.830 "is_configured": true, 00:38:28.830 "data_offset": 2048, 00:38:28.830 "data_size": 63488 
00:38:28.830 }, 00:38:28.830 { 00:38:28.830 "name": "pt4", 00:38:28.830 "uuid": "00000000-0000-0000-0000-000000000004", 00:38:28.830 "is_configured": true, 00:38:28.830 "data_offset": 2048, 00:38:28.830 "data_size": 63488 00:38:28.830 } 00:38:28.830 ] 00:38:28.830 } 00:38:28.830 } 00:38:28.830 }' 00:38:28.830 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:38:28.830 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:38:28.830 pt2 00:38:28.830 pt3 00:38:28.830 pt4' 00:38:28.830 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:28.830 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:38:28.830 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:28.830 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:28.830 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:38:28.830 05:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.830 05:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:28.830 05:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.088 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:29.088 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:29.088 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:29.088 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:38:29.088 05:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.088 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:29.088 05:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:29.088 05:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.088 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:29.088 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:29.088 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:29.088 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:29.088 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:38:29.089 05:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.089 05:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:29.089 05:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.089 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:29.089 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:29.089 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:29.089 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:38:29.089 05:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.089 05:29:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:38:29.089 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:29.089 05:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.089 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:29.089 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:29.089 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:38:29.089 05:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.089 05:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:29.089 05:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:38:29.089 [2024-12-09 05:29:15.971525] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:29.089 05:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.089 05:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 660bbdaf-596f-4954-9ab2-760d9dadcf64 '!=' 660bbdaf-596f-4954-9ab2-760d9dadcf64 ']' 00:38:29.089 05:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:38:29.089 05:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:38:29.089 05:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:38:29.089 05:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70890 00:38:29.089 05:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70890 ']' 00:38:29.089 05:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70890 00:38:29.089 05:29:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:38:29.089 05:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:29.089 05:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70890 00:38:29.089 05:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:29.089 05:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:29.089 05:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70890' 00:38:29.089 killing process with pid 70890 00:38:29.089 05:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70890 00:38:29.089 [2024-12-09 05:29:16.049923] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:29.089 05:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70890 00:38:29.089 [2024-12-09 05:29:16.050031] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:29.089 [2024-12-09 05:29:16.050133] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:29.089 [2024-12-09 05:29:16.050158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:38:29.656 [2024-12-09 05:29:16.395644] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:31.032 05:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:38:31.032 00:38:31.032 real 0m6.105s 00:38:31.032 user 0m9.051s 00:38:31.032 sys 0m0.966s 00:38:31.032 05:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:31.032 05:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:31.032 ************************************ 00:38:31.032 END TEST raid_superblock_test 
00:38:31.032 ************************************ 00:38:31.032 05:29:17 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:38:31.032 05:29:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:38:31.032 05:29:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:31.032 05:29:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:31.032 ************************************ 00:38:31.032 START TEST raid_read_error_test 00:38:31.032 ************************************ 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5pDJJaQYnw 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71163 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f 
-L bdev_raid 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71163 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71163 ']' 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:31.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:31.032 05:29:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:31.032 [2024-12-09 05:29:17.767663] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:31.032 [2024-12-09 05:29:17.767886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71163 ] 00:38:31.032 [2024-12-09 05:29:17.955944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:31.291 [2024-12-09 05:29:18.096673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:31.549 [2024-12-09 05:29:18.324256] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:31.549 [2024-12-09 05:29:18.324295] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:31.808 05:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:31.808 05:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:38:31.808 05:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:38:31.808 05:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:38:31.808 05:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:31.808 05:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:32.067 BaseBdev1_malloc 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:32.067 true 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:32.067 [2024-12-09 05:29:18.815607] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:38:32.067 [2024-12-09 05:29:18.815705] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:32.067 [2024-12-09 05:29:18.815733] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:38:32.067 [2024-12-09 05:29:18.815750] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:32.067 [2024-12-09 05:29:18.818851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:32.067 [2024-12-09 05:29:18.818912] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:38:32.067 BaseBdev1 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:32.067 BaseBdev2_malloc 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:32.067 true 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:32.067 [2024-12-09 05:29:18.880650] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:38:32.067 [2024-12-09 05:29:18.880893] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:32.067 [2024-12-09 05:29:18.880963] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:38:32.067 [2024-12-09 05:29:18.881087] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:32.067 [2024-12-09 05:29:18.884392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:32.067 [2024-12-09 05:29:18.884453] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:38:32.067 BaseBdev2 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:32.067 BaseBdev3_malloc 00:38:32.067 05:29:18 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:32.067 true 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:32.067 [2024-12-09 05:29:18.957403] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:38:32.067 [2024-12-09 05:29:18.957693] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:32.067 [2024-12-09 05:29:18.957762] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:38:32.067 [2024-12-09 05:29:18.957818] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:32.067 [2024-12-09 05:29:18.960722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:32.067 [2024-12-09 05:29:18.960810] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:38:32.067 BaseBdev3 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:32.067 05:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:32.067 BaseBdev4_malloc 00:38:32.067 05:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:32.067 05:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:38:32.067 05:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:32.067 05:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:32.067 true 00:38:32.067 05:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:32.067 05:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:38:32.067 05:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:32.067 05:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:32.067 [2024-12-09 05:29:19.023581] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:38:32.067 [2024-12-09 05:29:19.023846] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:32.067 [2024-12-09 05:29:19.023917] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:38:32.067 [2024-12-09 05:29:19.023943] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:32.067 [2024-12-09 05:29:19.027332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:32.067 [2024-12-09 05:29:19.027426] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:38:32.067 BaseBdev4 00:38:32.067 05:29:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:32.067 05:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:38:32.067 05:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:32.067 05:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:32.067 [2024-12-09 05:29:19.031709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:32.067 [2024-12-09 05:29:19.035014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:32.067 [2024-12-09 05:29:19.035261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:38:32.067 [2024-12-09 05:29:19.035410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:38:32.068 [2024-12-09 05:29:19.035830] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:38:32.068 [2024-12-09 05:29:19.035999] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:38:32.068 [2024-12-09 05:29:19.036331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:38:32.068 [2024-12-09 05:29:19.036556] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:38:32.068 [2024-12-09 05:29:19.036575] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:38:32.068 [2024-12-09 05:29:19.036791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:32.326 05:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:32.326 05:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:38:32.326 05:29:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:32.326 05:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:32.326 05:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:32.326 05:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:32.326 05:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:32.326 05:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:32.326 05:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:32.327 05:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:32.327 05:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:32.327 05:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:32.327 05:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:32.327 05:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:32.327 05:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:32.327 05:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:32.327 05:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:32.327 "name": "raid_bdev1", 00:38:32.327 "uuid": "62ccd847-b4fc-4f5a-996f-58c3aab6ff63", 00:38:32.327 "strip_size_kb": 64, 00:38:32.327 "state": "online", 00:38:32.327 "raid_level": "raid0", 00:38:32.327 "superblock": true, 00:38:32.327 "num_base_bdevs": 4, 00:38:32.327 "num_base_bdevs_discovered": 4, 00:38:32.327 "num_base_bdevs_operational": 4, 00:38:32.327 "base_bdevs_list": [ 00:38:32.327 
{ 00:38:32.327 "name": "BaseBdev1", 00:38:32.327 "uuid": "5ed08958-a4e4-5e9e-b3d0-f5759da52848", 00:38:32.327 "is_configured": true, 00:38:32.327 "data_offset": 2048, 00:38:32.327 "data_size": 63488 00:38:32.327 }, 00:38:32.327 { 00:38:32.327 "name": "BaseBdev2", 00:38:32.327 "uuid": "04e14a50-1c5a-5242-9552-1ecef6ea9499", 00:38:32.327 "is_configured": true, 00:38:32.327 "data_offset": 2048, 00:38:32.327 "data_size": 63488 00:38:32.327 }, 00:38:32.327 { 00:38:32.327 "name": "BaseBdev3", 00:38:32.327 "uuid": "26b70ee5-01a0-5b27-9d2c-986418714616", 00:38:32.327 "is_configured": true, 00:38:32.327 "data_offset": 2048, 00:38:32.327 "data_size": 63488 00:38:32.327 }, 00:38:32.327 { 00:38:32.327 "name": "BaseBdev4", 00:38:32.327 "uuid": "3d1757fe-5b82-56ca-8d94-7d3890ee4405", 00:38:32.327 "is_configured": true, 00:38:32.327 "data_offset": 2048, 00:38:32.327 "data_size": 63488 00:38:32.327 } 00:38:32.327 ] 00:38:32.327 }' 00:38:32.327 05:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:32.327 05:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:32.894 05:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:38:32.894 05:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:38:32.894 [2024-12-09 05:29:19.697449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:38:33.828 05:29:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:38:33.828 05:29:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:33.828 05:29:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:33.828 05:29:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:33.828 05:29:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:38:33.828 05:29:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:38:33.828 05:29:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:38:33.828 05:29:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:38:33.828 05:29:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:33.828 05:29:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:33.828 05:29:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:33.828 05:29:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:33.828 05:29:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:33.828 05:29:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:33.828 05:29:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:33.828 05:29:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:33.828 05:29:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:33.828 05:29:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:33.828 05:29:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:33.828 05:29:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:33.828 05:29:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:33.828 05:29:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:33.828 05:29:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:33.828 "name": "raid_bdev1", 00:38:33.828 "uuid": "62ccd847-b4fc-4f5a-996f-58c3aab6ff63", 00:38:33.828 "strip_size_kb": 64, 00:38:33.828 "state": "online", 00:38:33.828 "raid_level": "raid0", 00:38:33.828 "superblock": true, 00:38:33.828 "num_base_bdevs": 4, 00:38:33.828 "num_base_bdevs_discovered": 4, 00:38:33.828 "num_base_bdevs_operational": 4, 00:38:33.828 "base_bdevs_list": [ 00:38:33.828 { 00:38:33.829 "name": "BaseBdev1", 00:38:33.829 "uuid": "5ed08958-a4e4-5e9e-b3d0-f5759da52848", 00:38:33.829 "is_configured": true, 00:38:33.829 "data_offset": 2048, 00:38:33.829 "data_size": 63488 00:38:33.829 }, 00:38:33.829 { 00:38:33.829 "name": "BaseBdev2", 00:38:33.829 "uuid": "04e14a50-1c5a-5242-9552-1ecef6ea9499", 00:38:33.829 "is_configured": true, 00:38:33.829 "data_offset": 2048, 00:38:33.829 "data_size": 63488 00:38:33.829 }, 00:38:33.829 { 00:38:33.829 "name": "BaseBdev3", 00:38:33.829 "uuid": "26b70ee5-01a0-5b27-9d2c-986418714616", 00:38:33.829 "is_configured": true, 00:38:33.829 "data_offset": 2048, 00:38:33.829 "data_size": 63488 00:38:33.829 }, 00:38:33.829 { 00:38:33.829 "name": "BaseBdev4", 00:38:33.829 "uuid": "3d1757fe-5b82-56ca-8d94-7d3890ee4405", 00:38:33.829 "is_configured": true, 00:38:33.829 "data_offset": 2048, 00:38:33.829 "data_size": 63488 00:38:33.829 } 00:38:33.829 ] 00:38:33.829 }' 00:38:33.829 05:29:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:33.829 05:29:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:34.395 05:29:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:38:34.395 05:29:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.395 05:29:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:34.395 [2024-12-09 05:29:21.116241] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:34.395 [2024-12-09 05:29:21.116294] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:34.395 { 00:38:34.395 "results": [ 00:38:34.395 { 00:38:34.395 "job": "raid_bdev1", 00:38:34.395 "core_mask": "0x1", 00:38:34.395 "workload": "randrw", 00:38:34.395 "percentage": 50, 00:38:34.395 "status": "finished", 00:38:34.395 "queue_depth": 1, 00:38:34.395 "io_size": 131072, 00:38:34.395 "runtime": 1.416123, 00:38:34.395 "iops": 9426.441064794513, 00:38:34.395 "mibps": 1178.3051330993142, 00:38:34.395 "io_failed": 1, 00:38:34.395 "io_timeout": 0, 00:38:34.395 "avg_latency_us": 148.86542948586995, 00:38:34.395 "min_latency_us": 39.09818181818182, 00:38:34.395 "max_latency_us": 2174.6036363636363 00:38:34.395 } 00:38:34.395 ], 00:38:34.395 "core_count": 1 00:38:34.395 } 00:38:34.395 [2024-12-09 05:29:21.119934] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:34.395 [2024-12-09 05:29:21.120012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:34.395 [2024-12-09 05:29:21.120077] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:34.395 [2024-12-09 05:29:21.120097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:38:34.395 05:29:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.395 05:29:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71163 00:38:34.395 05:29:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71163 ']' 00:38:34.395 05:29:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71163 00:38:34.395 05:29:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:38:34.395 05:29:21 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:34.395 05:29:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71163 00:38:34.395 killing process with pid 71163 00:38:34.395 05:29:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:34.395 05:29:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:34.395 05:29:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71163' 00:38:34.396 05:29:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71163 00:38:34.396 [2024-12-09 05:29:21.155733] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:34.396 05:29:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71163 00:38:34.653 [2024-12-09 05:29:21.480924] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:36.029 05:29:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:38:36.029 05:29:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5pDJJaQYnw 00:38:36.029 05:29:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:38:36.029 05:29:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:38:36.029 05:29:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:38:36.029 05:29:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:38:36.029 05:29:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:38:36.029 05:29:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:38:36.029 00:38:36.029 real 0m5.187s 00:38:36.029 user 0m6.256s 00:38:36.029 sys 0m0.697s 00:38:36.029 05:29:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:38:36.029 ************************************ 00:38:36.029 END TEST raid_read_error_test 00:38:36.029 ************************************ 00:38:36.029 05:29:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:36.029 05:29:22 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:38:36.029 05:29:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:38:36.029 05:29:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:36.029 05:29:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:36.029 ************************************ 00:38:36.029 START TEST raid_write_error_test 00:38:36.029 ************************************ 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:38:36.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hrKuCdCKqS 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71314 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71314 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71314 ']' 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:36.029 05:29:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:36.288 [2024-12-09 05:29:23.012871] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:36.288 [2024-12-09 05:29:23.013084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71314 ] 00:38:36.288 [2024-12-09 05:29:23.203535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:36.546 [2024-12-09 05:29:23.347839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:36.804 [2024-12-09 05:29:23.562301] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:36.804 [2024-12-09 05:29:23.562347] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:37.062 05:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:37.062 05:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:38:37.062 05:29:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:38:37.062 05:29:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:38:37.062 05:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:37.062 05:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:37.062 BaseBdev1_malloc 00:38:37.062 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:37.063 05:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:38:37.063 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:37.063 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:37.330 true 00:38:37.330 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:38:37.330 05:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:38:37.330 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:37.330 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:37.330 [2024-12-09 05:29:24.047512] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:38:37.330 [2024-12-09 05:29:24.047811] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:37.330 [2024-12-09 05:29:24.047888] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:38:37.330 [2024-12-09 05:29:24.048122] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:37.330 [2024-12-09 05:29:24.051475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:37.330 [2024-12-09 05:29:24.051683] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:38:37.330 BaseBdev1 00:38:37.330 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:37.330 05:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:38:37.330 05:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:38:37.330 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:37.330 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:37.330 BaseBdev2_malloc 00:38:37.330 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:37.330 05:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:38:37.330 05:29:24 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:37.330 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:37.330 true 00:38:37.330 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:37.330 05:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:38:37.330 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:37.330 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:37.330 [2024-12-09 05:29:24.111735] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:38:37.330 [2024-12-09 05:29:24.111810] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:37.330 [2024-12-09 05:29:24.111841] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:38:37.330 [2024-12-09 05:29:24.111859] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:37.330 [2024-12-09 05:29:24.114753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:37.331 [2024-12-09 05:29:24.114815] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:38:37.331 BaseBdev2 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:38:37.331 BaseBdev3_malloc 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:37.331 true 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:37.331 [2024-12-09 05:29:24.186668] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:38:37.331 [2024-12-09 05:29:24.186930] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:37.331 [2024-12-09 05:29:24.186963] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:38:37.331 [2024-12-09 05:29:24.186982] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:37.331 [2024-12-09 05:29:24.190085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:37.331 [2024-12-09 05:29:24.190140] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:38:37.331 BaseBdev3 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:37.331 BaseBdev4_malloc 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:37.331 true 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:37.331 [2024-12-09 05:29:24.245604] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:38:37.331 [2024-12-09 05:29:24.245814] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:37.331 [2024-12-09 05:29:24.245850] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:38:37.331 [2024-12-09 05:29:24.245869] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:37.331 [2024-12-09 05:29:24.248755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:37.331 [2024-12-09 05:29:24.248834] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:38:37.331 BaseBdev4 
00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:37.331 [2024-12-09 05:29:24.253740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:37.331 [2024-12-09 05:29:24.256291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:37.331 [2024-12-09 05:29:24.256410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:38:37.331 [2024-12-09 05:29:24.256509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:38:37.331 [2024-12-09 05:29:24.256829] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:38:37.331 [2024-12-09 05:29:24.256856] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:38:37.331 [2024-12-09 05:29:24.257167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:38:37.331 [2024-12-09 05:29:24.257396] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:38:37.331 [2024-12-09 05:29:24.257415] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:38:37.331 [2024-12-09 05:29:24.257653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:37.331 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:37.589 05:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:37.589 "name": "raid_bdev1", 00:38:37.589 "uuid": "0cf08203-b26e-432c-acbd-61350fccbcab", 00:38:37.589 "strip_size_kb": 64, 00:38:37.589 "state": "online", 00:38:37.589 "raid_level": "raid0", 00:38:37.589 "superblock": true, 00:38:37.589 "num_base_bdevs": 4, 00:38:37.589 "num_base_bdevs_discovered": 4, 00:38:37.589 
"num_base_bdevs_operational": 4, 00:38:37.589 "base_bdevs_list": [ 00:38:37.589 { 00:38:37.589 "name": "BaseBdev1", 00:38:37.589 "uuid": "a31a4e75-1d6b-5693-82ec-c89545b55cbc", 00:38:37.589 "is_configured": true, 00:38:37.589 "data_offset": 2048, 00:38:37.589 "data_size": 63488 00:38:37.589 }, 00:38:37.589 { 00:38:37.589 "name": "BaseBdev2", 00:38:37.589 "uuid": "bf8ef81c-16ec-553e-a1c7-08ea4604498b", 00:38:37.589 "is_configured": true, 00:38:37.589 "data_offset": 2048, 00:38:37.589 "data_size": 63488 00:38:37.589 }, 00:38:37.589 { 00:38:37.589 "name": "BaseBdev3", 00:38:37.589 "uuid": "8d0faa53-3c48-55fa-987b-b82a70208cec", 00:38:37.589 "is_configured": true, 00:38:37.589 "data_offset": 2048, 00:38:37.589 "data_size": 63488 00:38:37.589 }, 00:38:37.589 { 00:38:37.589 "name": "BaseBdev4", 00:38:37.589 "uuid": "2748a50e-8d1d-5ec1-bc2d-46d22187f3ae", 00:38:37.589 "is_configured": true, 00:38:37.589 "data_offset": 2048, 00:38:37.589 "data_size": 63488 00:38:37.589 } 00:38:37.589 ] 00:38:37.589 }' 00:38:37.589 05:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:37.589 05:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:37.847 05:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:38:37.847 05:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:38:38.106 [2024-12-09 05:29:24.923917] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:38:39.039 05:29:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:38:39.039 05:29:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:39.039 05:29:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:39.039 05:29:25 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:39.039 05:29:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:38:39.039 05:29:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:38:39.039 05:29:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:38:39.039 05:29:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:38:39.039 05:29:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:39.039 05:29:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:39.039 05:29:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:38:39.039 05:29:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:39.039 05:29:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:39.039 05:29:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:39.039 05:29:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:39.039 05:29:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:39.039 05:29:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:39.039 05:29:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:39.039 05:29:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:39.039 05:29:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:39.039 05:29:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:39.039 05:29:25 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:39.039 05:29:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:39.039 "name": "raid_bdev1", 00:38:39.039 "uuid": "0cf08203-b26e-432c-acbd-61350fccbcab", 00:38:39.039 "strip_size_kb": 64, 00:38:39.039 "state": "online", 00:38:39.039 "raid_level": "raid0", 00:38:39.039 "superblock": true, 00:38:39.039 "num_base_bdevs": 4, 00:38:39.039 "num_base_bdevs_discovered": 4, 00:38:39.039 "num_base_bdevs_operational": 4, 00:38:39.039 "base_bdevs_list": [ 00:38:39.039 { 00:38:39.039 "name": "BaseBdev1", 00:38:39.039 "uuid": "a31a4e75-1d6b-5693-82ec-c89545b55cbc", 00:38:39.039 "is_configured": true, 00:38:39.039 "data_offset": 2048, 00:38:39.039 "data_size": 63488 00:38:39.039 }, 00:38:39.039 { 00:38:39.039 "name": "BaseBdev2", 00:38:39.039 "uuid": "bf8ef81c-16ec-553e-a1c7-08ea4604498b", 00:38:39.039 "is_configured": true, 00:38:39.039 "data_offset": 2048, 00:38:39.039 "data_size": 63488 00:38:39.039 }, 00:38:39.039 { 00:38:39.039 "name": "BaseBdev3", 00:38:39.039 "uuid": "8d0faa53-3c48-55fa-987b-b82a70208cec", 00:38:39.039 "is_configured": true, 00:38:39.039 "data_offset": 2048, 00:38:39.039 "data_size": 63488 00:38:39.039 }, 00:38:39.039 { 00:38:39.039 "name": "BaseBdev4", 00:38:39.039 "uuid": "2748a50e-8d1d-5ec1-bc2d-46d22187f3ae", 00:38:39.039 "is_configured": true, 00:38:39.039 "data_offset": 2048, 00:38:39.039 "data_size": 63488 00:38:39.039 } 00:38:39.039 ] 00:38:39.039 }' 00:38:39.039 05:29:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:39.039 05:29:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:39.606 05:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:38:39.606 05:29:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:39.606 05:29:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:38:39.606 [2024-12-09 05:29:26.339401] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:39.606 [2024-12-09 05:29:26.339440] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:39.606 { 00:38:39.606 "results": [ 00:38:39.606 { 00:38:39.606 "job": "raid_bdev1", 00:38:39.606 "core_mask": "0x1", 00:38:39.606 "workload": "randrw", 00:38:39.606 "percentage": 50, 00:38:39.606 "status": "finished", 00:38:39.606 "queue_depth": 1, 00:38:39.606 "io_size": 131072, 00:38:39.606 "runtime": 1.412534, 00:38:39.606 "iops": 9139.603011325746, 00:38:39.606 "mibps": 1142.4503764157182, 00:38:39.606 "io_failed": 1, 00:38:39.606 "io_timeout": 0, 00:38:39.606 "avg_latency_us": 153.53330563789865, 00:38:39.606 "min_latency_us": 35.374545454545455, 00:38:39.606 "max_latency_us": 2025.658181818182 00:38:39.606 } 00:38:39.606 ], 00:38:39.606 "core_count": 1 00:38:39.606 } 00:38:39.606 [2024-12-09 05:29:26.343101] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:39.606 [2024-12-09 05:29:26.343223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:39.606 [2024-12-09 05:29:26.343294] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:39.606 [2024-12-09 05:29:26.343323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:38:39.606 05:29:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:39.606 05:29:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71314 00:38:39.606 05:29:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71314 ']' 00:38:39.606 05:29:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71314 00:38:39.606 05:29:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:38:39.606 05:29:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:39.606 05:29:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71314 00:38:39.606 killing process with pid 71314 00:38:39.606 05:29:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:39.606 05:29:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:39.606 05:29:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71314' 00:38:39.606 05:29:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71314 00:38:39.606 [2024-12-09 05:29:26.378128] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:39.606 05:29:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71314 00:38:39.865 [2024-12-09 05:29:26.687556] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:41.270 05:29:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:38:41.270 05:29:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hrKuCdCKqS 00:38:41.270 05:29:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:38:41.270 05:29:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:38:41.270 05:29:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:38:41.270 05:29:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:38:41.270 05:29:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:38:41.270 05:29:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:38:41.270 00:38:41.270 real 0m5.067s 00:38:41.270 user 0m6.148s 00:38:41.270 sys 0m0.694s 00:38:41.270 05:29:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:41.270 05:29:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:38:41.270 ************************************ 00:38:41.270 END TEST raid_write_error_test 00:38:41.270 ************************************ 00:38:41.270 05:29:27 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:38:41.270 05:29:27 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:38:41.270 05:29:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:38:41.270 05:29:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:41.270 05:29:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:41.270 ************************************ 00:38:41.270 START TEST raid_state_function_test 00:38:41.270 ************************************ 00:38:41.270 05:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:38:41.270 05:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:38:41.270 05:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:38:41.270 05:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:38:41.270 05:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:38:41.270 05:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:38:41.270 05:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:38:41.270 05:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:38:41.270 05:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:38:41.270 05:29:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:38:41.271 05:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:38:41.271 05:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:38:41.271 05:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:38:41.271 05:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:38:41.271 05:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:38:41.271 05:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:38:41.271 05:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:38:41.271 05:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:38:41.271 05:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:38:41.271 05:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:38:41.271 05:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:38:41.271 05:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:38:41.271 05:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:38:41.271 05:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:38:41.271 05:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:38:41.271 05:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:38:41.271 05:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:38:41.271 05:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:38:41.271 05:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:38:41.271 05:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:38:41.271 05:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71458 00:38:41.271 Process raid pid: 71458 00:38:41.271 05:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71458' 00:38:41.271 05:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:38:41.271 05:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71458 00:38:41.271 05:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71458 ']' 00:38:41.271 05:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:41.271 05:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:41.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:41.271 05:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:41.271 05:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:41.271 05:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:41.271 [2024-12-09 05:29:28.126404] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:41.271 [2024-12-09 05:29:28.126896] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:41.529 [2024-12-09 05:29:28.325484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:41.787 [2024-12-09 05:29:28.509417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:41.788 [2024-12-09 05:29:28.722683] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:41.788 [2024-12-09 05:29:28.723019] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:42.353 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:42.353 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:38:42.353 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:38:42.353 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:42.353 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:42.353 [2024-12-09 05:29:29.067995] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:38:42.353 [2024-12-09 05:29:29.068270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:38:42.353 [2024-12-09 05:29:29.068390] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:42.353 [2024-12-09 05:29:29.068424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:42.353 [2024-12-09 05:29:29.068436] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:38:42.353 [2024-12-09 05:29:29.068450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:38:42.353 [2024-12-09 05:29:29.068459] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:38:42.353 [2024-12-09 05:29:29.068472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:38:42.353 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:42.353 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:38:42.353 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:42.353 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:42.353 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:38:42.353 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:42.353 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:42.353 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:42.353 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:42.353 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:42.353 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:42.353 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:42.353 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:42.353 05:29:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:38:42.353 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:42.353 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:42.353 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:42.353 "name": "Existed_Raid", 00:38:42.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:42.353 "strip_size_kb": 64, 00:38:42.353 "state": "configuring", 00:38:42.353 "raid_level": "concat", 00:38:42.353 "superblock": false, 00:38:42.353 "num_base_bdevs": 4, 00:38:42.353 "num_base_bdevs_discovered": 0, 00:38:42.353 "num_base_bdevs_operational": 4, 00:38:42.353 "base_bdevs_list": [ 00:38:42.353 { 00:38:42.353 "name": "BaseBdev1", 00:38:42.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:42.353 "is_configured": false, 00:38:42.353 "data_offset": 0, 00:38:42.353 "data_size": 0 00:38:42.353 }, 00:38:42.353 { 00:38:42.353 "name": "BaseBdev2", 00:38:42.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:42.353 "is_configured": false, 00:38:42.353 "data_offset": 0, 00:38:42.353 "data_size": 0 00:38:42.353 }, 00:38:42.353 { 00:38:42.353 "name": "BaseBdev3", 00:38:42.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:42.353 "is_configured": false, 00:38:42.353 "data_offset": 0, 00:38:42.353 "data_size": 0 00:38:42.353 }, 00:38:42.353 { 00:38:42.353 "name": "BaseBdev4", 00:38:42.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:42.353 "is_configured": false, 00:38:42.353 "data_offset": 0, 00:38:42.353 "data_size": 0 00:38:42.353 } 00:38:42.353 ] 00:38:42.353 }' 00:38:42.353 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:42.353 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:42.611 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:38:42.611 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:42.611 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:42.869 [2024-12-09 05:29:29.584177] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:38:42.869 [2024-12-09 05:29:29.584433] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:42.869 [2024-12-09 05:29:29.592199] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:38:42.869 [2024-12-09 05:29:29.592462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:38:42.869 [2024-12-09 05:29:29.592577] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:42.869 [2024-12-09 05:29:29.592634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:42.869 [2024-12-09 05:29:29.592920] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:38:42.869 [2024-12-09 05:29:29.592980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:38:42.869 [2024-12-09 05:29:29.593087] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:38:42.869 [2024-12-09 05:29:29.593145] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:42.869 [2024-12-09 05:29:29.636224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:42.869 BaseBdev1 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:42.869 [ 00:38:42.869 { 00:38:42.869 "name": "BaseBdev1", 00:38:42.869 "aliases": [ 00:38:42.869 "ed17e5d6-d111-4946-9aaf-39e6ebaca251" 00:38:42.869 ], 00:38:42.869 "product_name": "Malloc disk", 00:38:42.869 "block_size": 512, 00:38:42.869 "num_blocks": 65536, 00:38:42.869 "uuid": "ed17e5d6-d111-4946-9aaf-39e6ebaca251", 00:38:42.869 "assigned_rate_limits": { 00:38:42.869 "rw_ios_per_sec": 0, 00:38:42.869 "rw_mbytes_per_sec": 0, 00:38:42.869 "r_mbytes_per_sec": 0, 00:38:42.869 "w_mbytes_per_sec": 0 00:38:42.869 }, 00:38:42.869 "claimed": true, 00:38:42.869 "claim_type": "exclusive_write", 00:38:42.869 "zoned": false, 00:38:42.869 "supported_io_types": { 00:38:42.869 "read": true, 00:38:42.869 "write": true, 00:38:42.869 "unmap": true, 00:38:42.869 "flush": true, 00:38:42.869 "reset": true, 00:38:42.869 "nvme_admin": false, 00:38:42.869 "nvme_io": false, 00:38:42.869 "nvme_io_md": false, 00:38:42.869 "write_zeroes": true, 00:38:42.869 "zcopy": true, 00:38:42.869 "get_zone_info": false, 00:38:42.869 "zone_management": false, 00:38:42.869 "zone_append": false, 00:38:42.869 "compare": false, 00:38:42.869 "compare_and_write": false, 00:38:42.869 "abort": true, 00:38:42.869 "seek_hole": false, 00:38:42.869 "seek_data": false, 00:38:42.869 "copy": true, 00:38:42.869 "nvme_iov_md": false 00:38:42.869 }, 00:38:42.869 "memory_domains": [ 00:38:42.869 { 00:38:42.869 "dma_device_id": "system", 00:38:42.869 "dma_device_type": 1 00:38:42.869 }, 00:38:42.869 { 00:38:42.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:42.869 "dma_device_type": 2 00:38:42.869 } 00:38:42.869 ], 00:38:42.869 "driver_specific": {} 00:38:42.869 } 00:38:42.869 ] 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:42.869 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:42.869 "name": "Existed_Raid", 
00:38:42.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:42.870 "strip_size_kb": 64, 00:38:42.870 "state": "configuring", 00:38:42.870 "raid_level": "concat", 00:38:42.870 "superblock": false, 00:38:42.870 "num_base_bdevs": 4, 00:38:42.870 "num_base_bdevs_discovered": 1, 00:38:42.870 "num_base_bdevs_operational": 4, 00:38:42.870 "base_bdevs_list": [ 00:38:42.870 { 00:38:42.870 "name": "BaseBdev1", 00:38:42.870 "uuid": "ed17e5d6-d111-4946-9aaf-39e6ebaca251", 00:38:42.870 "is_configured": true, 00:38:42.870 "data_offset": 0, 00:38:42.870 "data_size": 65536 00:38:42.870 }, 00:38:42.870 { 00:38:42.870 "name": "BaseBdev2", 00:38:42.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:42.870 "is_configured": false, 00:38:42.870 "data_offset": 0, 00:38:42.870 "data_size": 0 00:38:42.870 }, 00:38:42.870 { 00:38:42.870 "name": "BaseBdev3", 00:38:42.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:42.870 "is_configured": false, 00:38:42.870 "data_offset": 0, 00:38:42.870 "data_size": 0 00:38:42.870 }, 00:38:42.870 { 00:38:42.870 "name": "BaseBdev4", 00:38:42.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:42.870 "is_configured": false, 00:38:42.870 "data_offset": 0, 00:38:42.870 "data_size": 0 00:38:42.870 } 00:38:42.870 ] 00:38:42.870 }' 00:38:42.870 05:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:42.870 05:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:43.436 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:38:43.436 05:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:43.436 05:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:43.436 [2024-12-09 05:29:30.188518] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:38:43.436 [2024-12-09 05:29:30.188586] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:38:43.436 05:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:43.436 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:38:43.436 05:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:43.436 05:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:43.436 [2024-12-09 05:29:30.196582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:43.436 [2024-12-09 05:29:30.199319] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:43.436 [2024-12-09 05:29:30.199534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:43.436 [2024-12-09 05:29:30.199561] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:38:43.436 [2024-12-09 05:29:30.199580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:38:43.436 [2024-12-09 05:29:30.199591] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:38:43.436 [2024-12-09 05:29:30.199604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:38:43.436 05:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:43.436 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:38:43.436 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:38:43.436 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:38:43.436 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:43.436 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:43.436 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:38:43.436 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:43.436 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:43.436 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:43.436 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:43.436 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:43.436 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:43.436 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:43.436 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:43.436 05:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:43.436 05:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:43.436 05:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:43.436 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:43.436 "name": "Existed_Raid", 00:38:43.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:43.436 "strip_size_kb": 64, 00:38:43.436 "state": "configuring", 00:38:43.436 "raid_level": "concat", 00:38:43.436 "superblock": false, 00:38:43.436 "num_base_bdevs": 4, 00:38:43.436 
"num_base_bdevs_discovered": 1, 00:38:43.436 "num_base_bdevs_operational": 4, 00:38:43.436 "base_bdevs_list": [ 00:38:43.436 { 00:38:43.436 "name": "BaseBdev1", 00:38:43.436 "uuid": "ed17e5d6-d111-4946-9aaf-39e6ebaca251", 00:38:43.436 "is_configured": true, 00:38:43.436 "data_offset": 0, 00:38:43.436 "data_size": 65536 00:38:43.436 }, 00:38:43.436 { 00:38:43.436 "name": "BaseBdev2", 00:38:43.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:43.436 "is_configured": false, 00:38:43.436 "data_offset": 0, 00:38:43.436 "data_size": 0 00:38:43.436 }, 00:38:43.436 { 00:38:43.436 "name": "BaseBdev3", 00:38:43.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:43.436 "is_configured": false, 00:38:43.436 "data_offset": 0, 00:38:43.436 "data_size": 0 00:38:43.436 }, 00:38:43.436 { 00:38:43.436 "name": "BaseBdev4", 00:38:43.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:43.436 "is_configured": false, 00:38:43.436 "data_offset": 0, 00:38:43.436 "data_size": 0 00:38:43.436 } 00:38:43.436 ] 00:38:43.436 }' 00:38:43.436 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:43.436 05:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:44.002 [2024-12-09 05:29:30.745499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:44.002 BaseBdev2 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:38:44.002 05:29:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:44.002 [ 00:38:44.002 { 00:38:44.002 "name": "BaseBdev2", 00:38:44.002 "aliases": [ 00:38:44.002 "e9da9ca3-e6d9-4cb2-bf18-55a2da9674c8" 00:38:44.002 ], 00:38:44.002 "product_name": "Malloc disk", 00:38:44.002 "block_size": 512, 00:38:44.002 "num_blocks": 65536, 00:38:44.002 "uuid": "e9da9ca3-e6d9-4cb2-bf18-55a2da9674c8", 00:38:44.002 "assigned_rate_limits": { 00:38:44.002 "rw_ios_per_sec": 0, 00:38:44.002 "rw_mbytes_per_sec": 0, 00:38:44.002 "r_mbytes_per_sec": 0, 00:38:44.002 "w_mbytes_per_sec": 0 00:38:44.002 }, 00:38:44.002 "claimed": true, 00:38:44.002 "claim_type": "exclusive_write", 00:38:44.002 "zoned": false, 00:38:44.002 "supported_io_types": { 
00:38:44.002 "read": true, 00:38:44.002 "write": true, 00:38:44.002 "unmap": true, 00:38:44.002 "flush": true, 00:38:44.002 "reset": true, 00:38:44.002 "nvme_admin": false, 00:38:44.002 "nvme_io": false, 00:38:44.002 "nvme_io_md": false, 00:38:44.002 "write_zeroes": true, 00:38:44.002 "zcopy": true, 00:38:44.002 "get_zone_info": false, 00:38:44.002 "zone_management": false, 00:38:44.002 "zone_append": false, 00:38:44.002 "compare": false, 00:38:44.002 "compare_and_write": false, 00:38:44.002 "abort": true, 00:38:44.002 "seek_hole": false, 00:38:44.002 "seek_data": false, 00:38:44.002 "copy": true, 00:38:44.002 "nvme_iov_md": false 00:38:44.002 }, 00:38:44.002 "memory_domains": [ 00:38:44.002 { 00:38:44.002 "dma_device_id": "system", 00:38:44.002 "dma_device_type": 1 00:38:44.002 }, 00:38:44.002 { 00:38:44.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:44.002 "dma_device_type": 2 00:38:44.002 } 00:38:44.002 ], 00:38:44.002 "driver_specific": {} 00:38:44.002 } 00:38:44.002 ] 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.002 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:44.002 "name": "Existed_Raid", 00:38:44.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:44.002 "strip_size_kb": 64, 00:38:44.002 "state": "configuring", 00:38:44.002 "raid_level": "concat", 00:38:44.002 "superblock": false, 00:38:44.002 "num_base_bdevs": 4, 00:38:44.002 "num_base_bdevs_discovered": 2, 00:38:44.002 "num_base_bdevs_operational": 4, 00:38:44.002 "base_bdevs_list": [ 00:38:44.002 { 00:38:44.002 "name": "BaseBdev1", 00:38:44.002 "uuid": "ed17e5d6-d111-4946-9aaf-39e6ebaca251", 00:38:44.002 "is_configured": true, 00:38:44.002 "data_offset": 0, 00:38:44.002 "data_size": 65536 00:38:44.002 }, 00:38:44.002 { 00:38:44.002 "name": "BaseBdev2", 00:38:44.002 "uuid": "e9da9ca3-e6d9-4cb2-bf18-55a2da9674c8", 00:38:44.002 
"is_configured": true, 00:38:44.002 "data_offset": 0, 00:38:44.002 "data_size": 65536 00:38:44.002 }, 00:38:44.002 { 00:38:44.002 "name": "BaseBdev3", 00:38:44.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:44.002 "is_configured": false, 00:38:44.002 "data_offset": 0, 00:38:44.002 "data_size": 0 00:38:44.002 }, 00:38:44.002 { 00:38:44.002 "name": "BaseBdev4", 00:38:44.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:44.003 "is_configured": false, 00:38:44.003 "data_offset": 0, 00:38:44.003 "data_size": 0 00:38:44.003 } 00:38:44.003 ] 00:38:44.003 }' 00:38:44.003 05:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:44.003 05:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:44.594 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:38:44.594 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.594 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:44.594 [2024-12-09 05:29:31.349238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:38:44.594 BaseBdev3 00:38:44.594 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.594 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:38:44.594 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:38:44.594 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:44.594 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:38:44.594 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:44.594 05:29:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:44.594 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:38:44.594 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.594 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:44.594 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.594 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:38:44.594 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.594 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:44.594 [ 00:38:44.594 { 00:38:44.594 "name": "BaseBdev3", 00:38:44.594 "aliases": [ 00:38:44.594 "aadc2785-b1d9-41c9-a984-6b139d6f3557" 00:38:44.594 ], 00:38:44.594 "product_name": "Malloc disk", 00:38:44.594 "block_size": 512, 00:38:44.594 "num_blocks": 65536, 00:38:44.594 "uuid": "aadc2785-b1d9-41c9-a984-6b139d6f3557", 00:38:44.594 "assigned_rate_limits": { 00:38:44.594 "rw_ios_per_sec": 0, 00:38:44.594 "rw_mbytes_per_sec": 0, 00:38:44.594 "r_mbytes_per_sec": 0, 00:38:44.594 "w_mbytes_per_sec": 0 00:38:44.594 }, 00:38:44.594 "claimed": true, 00:38:44.594 "claim_type": "exclusive_write", 00:38:44.594 "zoned": false, 00:38:44.594 "supported_io_types": { 00:38:44.594 "read": true, 00:38:44.594 "write": true, 00:38:44.594 "unmap": true, 00:38:44.594 "flush": true, 00:38:44.594 "reset": true, 00:38:44.594 "nvme_admin": false, 00:38:44.594 "nvme_io": false, 00:38:44.594 "nvme_io_md": false, 00:38:44.594 "write_zeroes": true, 00:38:44.594 "zcopy": true, 00:38:44.594 "get_zone_info": false, 00:38:44.594 "zone_management": false, 00:38:44.594 "zone_append": false, 00:38:44.594 "compare": false, 00:38:44.594 "compare_and_write": false, 
00:38:44.594 "abort": true, 00:38:44.594 "seek_hole": false, 00:38:44.594 "seek_data": false, 00:38:44.594 "copy": true, 00:38:44.594 "nvme_iov_md": false 00:38:44.594 }, 00:38:44.594 "memory_domains": [ 00:38:44.594 { 00:38:44.594 "dma_device_id": "system", 00:38:44.594 "dma_device_type": 1 00:38:44.594 }, 00:38:44.594 { 00:38:44.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:44.594 "dma_device_type": 2 00:38:44.594 } 00:38:44.594 ], 00:38:44.594 "driver_specific": {} 00:38:44.594 } 00:38:44.594 ] 00:38:44.594 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.594 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:38:44.594 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:38:44.594 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:38:44.594 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:38:44.594 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:44.594 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:44.594 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:38:44.594 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:44.594 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:44.594 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:44.594 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:44.595 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:38:44.595 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:44.595 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:44.595 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.595 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:44.595 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:44.595 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.595 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:44.595 "name": "Existed_Raid", 00:38:44.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:44.595 "strip_size_kb": 64, 00:38:44.595 "state": "configuring", 00:38:44.595 "raid_level": "concat", 00:38:44.595 "superblock": false, 00:38:44.595 "num_base_bdevs": 4, 00:38:44.595 "num_base_bdevs_discovered": 3, 00:38:44.595 "num_base_bdevs_operational": 4, 00:38:44.595 "base_bdevs_list": [ 00:38:44.595 { 00:38:44.595 "name": "BaseBdev1", 00:38:44.595 "uuid": "ed17e5d6-d111-4946-9aaf-39e6ebaca251", 00:38:44.595 "is_configured": true, 00:38:44.595 "data_offset": 0, 00:38:44.595 "data_size": 65536 00:38:44.595 }, 00:38:44.595 { 00:38:44.595 "name": "BaseBdev2", 00:38:44.595 "uuid": "e9da9ca3-e6d9-4cb2-bf18-55a2da9674c8", 00:38:44.595 "is_configured": true, 00:38:44.595 "data_offset": 0, 00:38:44.595 "data_size": 65536 00:38:44.595 }, 00:38:44.595 { 00:38:44.595 "name": "BaseBdev3", 00:38:44.595 "uuid": "aadc2785-b1d9-41c9-a984-6b139d6f3557", 00:38:44.595 "is_configured": true, 00:38:44.595 "data_offset": 0, 00:38:44.595 "data_size": 65536 00:38:44.595 }, 00:38:44.595 { 00:38:44.595 "name": "BaseBdev4", 00:38:44.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:44.595 "is_configured": false, 
00:38:44.595 "data_offset": 0, 00:38:44.595 "data_size": 0 00:38:44.595 } 00:38:44.595 ] 00:38:44.595 }' 00:38:44.595 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:44.595 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:45.162 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:38:45.162 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.162 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:45.162 [2024-12-09 05:29:31.932810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:38:45.162 [2024-12-09 05:29:31.932868] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:38:45.162 [2024-12-09 05:29:31.932880] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:38:45.162 [2024-12-09 05:29:31.933241] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:38:45.162 [2024-12-09 05:29:31.933452] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:38:45.162 [2024-12-09 05:29:31.933477] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:38:45.162 [2024-12-09 05:29:31.933759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:45.162 BaseBdev4 00:38:45.162 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.162 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:38:45.162 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:38:45.162 05:29:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:45.162 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:38:45.162 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:45.162 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:45.162 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:38:45.162 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.162 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:45.162 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.162 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:38:45.162 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.162 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:45.162 [ 00:38:45.162 { 00:38:45.162 "name": "BaseBdev4", 00:38:45.162 "aliases": [ 00:38:45.162 "14994193-8be0-4f97-bb76-7457fd9f8b04" 00:38:45.162 ], 00:38:45.162 "product_name": "Malloc disk", 00:38:45.162 "block_size": 512, 00:38:45.162 "num_blocks": 65536, 00:38:45.162 "uuid": "14994193-8be0-4f97-bb76-7457fd9f8b04", 00:38:45.162 "assigned_rate_limits": { 00:38:45.162 "rw_ios_per_sec": 0, 00:38:45.162 "rw_mbytes_per_sec": 0, 00:38:45.162 "r_mbytes_per_sec": 0, 00:38:45.162 "w_mbytes_per_sec": 0 00:38:45.162 }, 00:38:45.162 "claimed": true, 00:38:45.162 "claim_type": "exclusive_write", 00:38:45.162 "zoned": false, 00:38:45.162 "supported_io_types": { 00:38:45.162 "read": true, 00:38:45.162 "write": true, 00:38:45.162 "unmap": true, 00:38:45.162 "flush": true, 00:38:45.162 "reset": true, 00:38:45.162 
"nvme_admin": false, 00:38:45.162 "nvme_io": false, 00:38:45.162 "nvme_io_md": false, 00:38:45.162 "write_zeroes": true, 00:38:45.162 "zcopy": true, 00:38:45.162 "get_zone_info": false, 00:38:45.162 "zone_management": false, 00:38:45.162 "zone_append": false, 00:38:45.162 "compare": false, 00:38:45.162 "compare_and_write": false, 00:38:45.162 "abort": true, 00:38:45.162 "seek_hole": false, 00:38:45.162 "seek_data": false, 00:38:45.162 "copy": true, 00:38:45.162 "nvme_iov_md": false 00:38:45.162 }, 00:38:45.162 "memory_domains": [ 00:38:45.162 { 00:38:45.162 "dma_device_id": "system", 00:38:45.162 "dma_device_type": 1 00:38:45.162 }, 00:38:45.162 { 00:38:45.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:45.163 "dma_device_type": 2 00:38:45.163 } 00:38:45.163 ], 00:38:45.163 "driver_specific": {} 00:38:45.163 } 00:38:45.163 ] 00:38:45.163 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.163 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:38:45.163 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:38:45.163 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:38:45.163 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:38:45.163 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:45.163 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:45.163 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:38:45.163 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:45.163 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:45.163 
05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:45.163 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:45.163 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:45.163 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:45.163 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:45.163 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.163 05:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:45.163 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:45.163 05:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.163 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:45.163 "name": "Existed_Raid", 00:38:45.163 "uuid": "7844dc2a-08ab-448d-9196-839027404179", 00:38:45.163 "strip_size_kb": 64, 00:38:45.163 "state": "online", 00:38:45.163 "raid_level": "concat", 00:38:45.163 "superblock": false, 00:38:45.163 "num_base_bdevs": 4, 00:38:45.163 "num_base_bdevs_discovered": 4, 00:38:45.163 "num_base_bdevs_operational": 4, 00:38:45.163 "base_bdevs_list": [ 00:38:45.163 { 00:38:45.163 "name": "BaseBdev1", 00:38:45.163 "uuid": "ed17e5d6-d111-4946-9aaf-39e6ebaca251", 00:38:45.163 "is_configured": true, 00:38:45.163 "data_offset": 0, 00:38:45.163 "data_size": 65536 00:38:45.163 }, 00:38:45.163 { 00:38:45.163 "name": "BaseBdev2", 00:38:45.163 "uuid": "e9da9ca3-e6d9-4cb2-bf18-55a2da9674c8", 00:38:45.163 "is_configured": true, 00:38:45.163 "data_offset": 0, 00:38:45.163 "data_size": 65536 00:38:45.163 }, 00:38:45.163 { 00:38:45.163 "name": "BaseBdev3", 
00:38:45.163 "uuid": "aadc2785-b1d9-41c9-a984-6b139d6f3557", 00:38:45.163 "is_configured": true, 00:38:45.163 "data_offset": 0, 00:38:45.163 "data_size": 65536 00:38:45.163 }, 00:38:45.163 { 00:38:45.163 "name": "BaseBdev4", 00:38:45.163 "uuid": "14994193-8be0-4f97-bb76-7457fd9f8b04", 00:38:45.163 "is_configured": true, 00:38:45.163 "data_offset": 0, 00:38:45.163 "data_size": 65536 00:38:45.163 } 00:38:45.163 ] 00:38:45.163 }' 00:38:45.163 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:45.163 05:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:45.729 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:38:45.729 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:38:45.729 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:38:45.729 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:38:45.729 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:38:45.729 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:38:45.729 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:38:45.729 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:38:45.729 05:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.729 05:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:45.729 [2024-12-09 05:29:32.497371] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:45.729 05:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.729 
05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:45.730 "name": "Existed_Raid", 00:38:45.730 "aliases": [ 00:38:45.730 "7844dc2a-08ab-448d-9196-839027404179" 00:38:45.730 ], 00:38:45.730 "product_name": "Raid Volume", 00:38:45.730 "block_size": 512, 00:38:45.730 "num_blocks": 262144, 00:38:45.730 "uuid": "7844dc2a-08ab-448d-9196-839027404179", 00:38:45.730 "assigned_rate_limits": { 00:38:45.730 "rw_ios_per_sec": 0, 00:38:45.730 "rw_mbytes_per_sec": 0, 00:38:45.730 "r_mbytes_per_sec": 0, 00:38:45.730 "w_mbytes_per_sec": 0 00:38:45.730 }, 00:38:45.730 "claimed": false, 00:38:45.730 "zoned": false, 00:38:45.730 "supported_io_types": { 00:38:45.730 "read": true, 00:38:45.730 "write": true, 00:38:45.730 "unmap": true, 00:38:45.730 "flush": true, 00:38:45.730 "reset": true, 00:38:45.730 "nvme_admin": false, 00:38:45.730 "nvme_io": false, 00:38:45.730 "nvme_io_md": false, 00:38:45.730 "write_zeroes": true, 00:38:45.730 "zcopy": false, 00:38:45.730 "get_zone_info": false, 00:38:45.730 "zone_management": false, 00:38:45.730 "zone_append": false, 00:38:45.730 "compare": false, 00:38:45.730 "compare_and_write": false, 00:38:45.730 "abort": false, 00:38:45.730 "seek_hole": false, 00:38:45.730 "seek_data": false, 00:38:45.730 "copy": false, 00:38:45.730 "nvme_iov_md": false 00:38:45.730 }, 00:38:45.730 "memory_domains": [ 00:38:45.730 { 00:38:45.730 "dma_device_id": "system", 00:38:45.730 "dma_device_type": 1 00:38:45.730 }, 00:38:45.730 { 00:38:45.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:45.730 "dma_device_type": 2 00:38:45.730 }, 00:38:45.730 { 00:38:45.730 "dma_device_id": "system", 00:38:45.730 "dma_device_type": 1 00:38:45.730 }, 00:38:45.730 { 00:38:45.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:45.730 "dma_device_type": 2 00:38:45.730 }, 00:38:45.730 { 00:38:45.730 "dma_device_id": "system", 00:38:45.730 "dma_device_type": 1 00:38:45.730 }, 00:38:45.730 { 00:38:45.730 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:38:45.730 "dma_device_type": 2 00:38:45.730 }, 00:38:45.730 { 00:38:45.730 "dma_device_id": "system", 00:38:45.730 "dma_device_type": 1 00:38:45.730 }, 00:38:45.730 { 00:38:45.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:45.730 "dma_device_type": 2 00:38:45.730 } 00:38:45.730 ], 00:38:45.730 "driver_specific": { 00:38:45.730 "raid": { 00:38:45.730 "uuid": "7844dc2a-08ab-448d-9196-839027404179", 00:38:45.730 "strip_size_kb": 64, 00:38:45.730 "state": "online", 00:38:45.730 "raid_level": "concat", 00:38:45.730 "superblock": false, 00:38:45.730 "num_base_bdevs": 4, 00:38:45.730 "num_base_bdevs_discovered": 4, 00:38:45.730 "num_base_bdevs_operational": 4, 00:38:45.730 "base_bdevs_list": [ 00:38:45.730 { 00:38:45.730 "name": "BaseBdev1", 00:38:45.730 "uuid": "ed17e5d6-d111-4946-9aaf-39e6ebaca251", 00:38:45.730 "is_configured": true, 00:38:45.730 "data_offset": 0, 00:38:45.730 "data_size": 65536 00:38:45.730 }, 00:38:45.730 { 00:38:45.730 "name": "BaseBdev2", 00:38:45.730 "uuid": "e9da9ca3-e6d9-4cb2-bf18-55a2da9674c8", 00:38:45.730 "is_configured": true, 00:38:45.730 "data_offset": 0, 00:38:45.730 "data_size": 65536 00:38:45.730 }, 00:38:45.730 { 00:38:45.730 "name": "BaseBdev3", 00:38:45.730 "uuid": "aadc2785-b1d9-41c9-a984-6b139d6f3557", 00:38:45.730 "is_configured": true, 00:38:45.730 "data_offset": 0, 00:38:45.730 "data_size": 65536 00:38:45.730 }, 00:38:45.730 { 00:38:45.730 "name": "BaseBdev4", 00:38:45.730 "uuid": "14994193-8be0-4f97-bb76-7457fd9f8b04", 00:38:45.730 "is_configured": true, 00:38:45.730 "data_offset": 0, 00:38:45.730 "data_size": 65536 00:38:45.730 } 00:38:45.730 ] 00:38:45.730 } 00:38:45.730 } 00:38:45.730 }' 00:38:45.730 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:38:45.730 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:38:45.730 BaseBdev2 
00:38:45.730 BaseBdev3 00:38:45.730 BaseBdev4' 00:38:45.730 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:45.730 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:38:45.730 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:45.730 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:38:45.730 05:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.730 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:45.730 05:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:45.730 05:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.989 05:29:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:45.989 05:29:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:45.989 [2024-12-09 05:29:32.873185] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:45.989 [2024-12-09 05:29:32.873221] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:45.989 [2024-12-09 05:29:32.873281] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:45.989 05:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:46.248 05:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:46.248 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:46.248 "name": "Existed_Raid", 00:38:46.248 "uuid": "7844dc2a-08ab-448d-9196-839027404179", 00:38:46.248 "strip_size_kb": 64, 00:38:46.248 "state": "offline", 00:38:46.248 "raid_level": "concat", 00:38:46.248 "superblock": false, 00:38:46.248 "num_base_bdevs": 4, 00:38:46.248 "num_base_bdevs_discovered": 3, 00:38:46.248 "num_base_bdevs_operational": 3, 00:38:46.248 "base_bdevs_list": [ 00:38:46.248 { 00:38:46.248 "name": null, 00:38:46.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:46.248 "is_configured": false, 00:38:46.248 "data_offset": 0, 00:38:46.248 "data_size": 65536 00:38:46.248 }, 00:38:46.248 { 00:38:46.248 "name": "BaseBdev2", 00:38:46.248 "uuid": "e9da9ca3-e6d9-4cb2-bf18-55a2da9674c8", 00:38:46.248 "is_configured": 
true, 00:38:46.248 "data_offset": 0, 00:38:46.248 "data_size": 65536 00:38:46.248 }, 00:38:46.248 { 00:38:46.248 "name": "BaseBdev3", 00:38:46.248 "uuid": "aadc2785-b1d9-41c9-a984-6b139d6f3557", 00:38:46.248 "is_configured": true, 00:38:46.248 "data_offset": 0, 00:38:46.248 "data_size": 65536 00:38:46.248 }, 00:38:46.248 { 00:38:46.248 "name": "BaseBdev4", 00:38:46.248 "uuid": "14994193-8be0-4f97-bb76-7457fd9f8b04", 00:38:46.248 "is_configured": true, 00:38:46.248 "data_offset": 0, 00:38:46.248 "data_size": 65536 00:38:46.248 } 00:38:46.248 ] 00:38:46.248 }' 00:38:46.248 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:46.248 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:46.507 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:38:46.507 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:38:46.507 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:46.507 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:38:46.507 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:46.507 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:46.765 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:46.765 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:38:46.765 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:38:46.765 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:38:46.765 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:38:46.765 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:46.765 [2024-12-09 05:29:33.523700] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:38:46.765 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:46.765 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:38:46.765 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:38:46.765 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:46.765 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:38:46.765 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:46.765 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:46.765 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:46.765 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:38:46.765 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:38:46.765 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:38:46.765 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:46.765 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:46.765 [2024-12-09 05:29:33.658078] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:38:46.765 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:46.765 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:38:46.765 05:29:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:47.026 [2024-12-09 05:29:33.792667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:38:47.026 [2024-12-09 05:29:33.792726] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:47.026 BaseBdev2 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.026 05:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:47.026 [ 00:38:47.026 { 00:38:47.026 "name": "BaseBdev2", 00:38:47.026 "aliases": [ 00:38:47.026 "a21a57d9-7961-43a0-9ab7-918cdcd9b5be" 00:38:47.026 ], 00:38:47.026 "product_name": "Malloc disk", 00:38:47.026 "block_size": 512, 00:38:47.026 "num_blocks": 65536, 00:38:47.026 "uuid": "a21a57d9-7961-43a0-9ab7-918cdcd9b5be", 00:38:47.026 "assigned_rate_limits": { 00:38:47.026 "rw_ios_per_sec": 0, 00:38:47.026 "rw_mbytes_per_sec": 0, 00:38:47.026 "r_mbytes_per_sec": 0, 00:38:47.026 "w_mbytes_per_sec": 0 00:38:47.026 }, 00:38:47.026 "claimed": false, 00:38:47.026 "zoned": false, 00:38:47.026 "supported_io_types": { 00:38:47.026 "read": true, 00:38:47.026 "write": true, 00:38:47.026 "unmap": true, 00:38:47.026 "flush": true, 00:38:47.285 "reset": true, 00:38:47.285 "nvme_admin": false, 00:38:47.285 "nvme_io": false, 00:38:47.285 "nvme_io_md": false, 00:38:47.285 "write_zeroes": true, 00:38:47.285 "zcopy": true, 00:38:47.285 "get_zone_info": false, 00:38:47.285 "zone_management": false, 00:38:47.285 "zone_append": false, 00:38:47.285 "compare": false, 00:38:47.285 "compare_and_write": false, 00:38:47.285 "abort": true, 00:38:47.285 "seek_hole": false, 00:38:47.285 "seek_data": false, 
00:38:47.285 "copy": true,
00:38:47.285 "nvme_iov_md": false
00:38:47.285 },
00:38:47.285 "memory_domains": [
00:38:47.285 {
00:38:47.285 "dma_device_id": "system",
00:38:47.285 "dma_device_type": 1
00:38:47.285 },
00:38:47.285 {
00:38:47.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:38:47.285 "dma_device_type": 2
00:38:47.285 }
00:38:47.285 ],
00:38:47.285 "driver_specific": {}
00:38:47.285 }
00:38:47.285 ]
00:38:47.285 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:47.285 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:38:47.285 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:38:47.285 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:38:47.285 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:38:47.285 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:47.285 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:38:47.285 BaseBdev3
00:38:47.285 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:47.285 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:38:47.285 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:38:47.285 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:38:47.285 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:38:47.285 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:38:47.285 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:38:47.285 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:38:47.285 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:47.285 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:38:47.285 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:47.285 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:38:47.285 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:47.285 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:38:47.285 [
00:38:47.285 {
00:38:47.285 "name": "BaseBdev3",
00:38:47.285 "aliases": [
00:38:47.285 "727f2d85-6723-4199-98e4-b76d72e27a2d"
00:38:47.285 ],
00:38:47.285 "product_name": "Malloc disk",
00:38:47.285 "block_size": 512,
00:38:47.285 "num_blocks": 65536,
00:38:47.285 "uuid": "727f2d85-6723-4199-98e4-b76d72e27a2d",
00:38:47.285 "assigned_rate_limits": {
00:38:47.285 "rw_ios_per_sec": 0,
00:38:47.285 "rw_mbytes_per_sec": 0,
00:38:47.285 "r_mbytes_per_sec": 0,
00:38:47.285 "w_mbytes_per_sec": 0
00:38:47.285 },
00:38:47.285 "claimed": false,
00:38:47.285 "zoned": false,
00:38:47.285 "supported_io_types": {
00:38:47.285 "read": true,
00:38:47.285 "write": true,
00:38:47.285 "unmap": true,
00:38:47.285 "flush": true,
00:38:47.285 "reset": true,
00:38:47.285 "nvme_admin": false,
00:38:47.285 "nvme_io": false,
00:38:47.285 "nvme_io_md": false,
00:38:47.285 "write_zeroes": true,
00:38:47.285 "zcopy": true,
00:38:47.285 "get_zone_info": false,
00:38:47.286 "zone_management": false,
00:38:47.286 "zone_append": false,
00:38:47.286 "compare": false,
00:38:47.286 "compare_and_write": false,
00:38:47.286 "abort": true,
00:38:47.286 "seek_hole": false,
00:38:47.286 "seek_data": false,
00:38:47.286 "copy": true,
00:38:47.286 "nvme_iov_md": false
00:38:47.286 },
00:38:47.286 "memory_domains": [
00:38:47.286 {
00:38:47.286 "dma_device_id": "system",
00:38:47.286 "dma_device_type": 1
00:38:47.286 },
00:38:47.286 {
00:38:47.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:38:47.286 "dma_device_type": 2
00:38:47.286 }
00:38:47.286 ],
00:38:47.286 "driver_specific": {}
00:38:47.286 }
00:38:47.286 ]
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:38:47.286 BaseBdev4
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:38:47.286 [
00:38:47.286 {
00:38:47.286 "name": "BaseBdev4",
00:38:47.286 "aliases": [
00:38:47.286 "f3844853-6e7f-4f20-9232-d7eff9ff65ff"
00:38:47.286 ],
00:38:47.286 "product_name": "Malloc disk",
00:38:47.286 "block_size": 512,
00:38:47.286 "num_blocks": 65536,
00:38:47.286 "uuid": "f3844853-6e7f-4f20-9232-d7eff9ff65ff",
00:38:47.286 "assigned_rate_limits": {
00:38:47.286 "rw_ios_per_sec": 0,
00:38:47.286 "rw_mbytes_per_sec": 0,
00:38:47.286 "r_mbytes_per_sec": 0,
00:38:47.286 "w_mbytes_per_sec": 0
00:38:47.286 },
00:38:47.286 "claimed": false,
00:38:47.286 "zoned": false,
00:38:47.286 "supported_io_types": {
00:38:47.286 "read": true,
00:38:47.286 "write": true,
00:38:47.286 "unmap": true,
00:38:47.286 "flush": true,
00:38:47.286 "reset": true,
00:38:47.286 "nvme_admin": false,
00:38:47.286 "nvme_io": false,
00:38:47.286 "nvme_io_md": false,
00:38:47.286 "write_zeroes": true,
00:38:47.286 "zcopy": true,
00:38:47.286 "get_zone_info": false,
00:38:47.286 "zone_management": false,
00:38:47.286 "zone_append": false,
00:38:47.286 "compare": false,
00:38:47.286 "compare_and_write": false,
00:38:47.286 "abort": true,
00:38:47.286 "seek_hole": false,
00:38:47.286 "seek_data": false,
00:38:47.286 "copy": true,
00:38:47.286 "nvme_iov_md": false
00:38:47.286 },
00:38:47.286 "memory_domains": [
00:38:47.286 {
00:38:47.286 "dma_device_id": "system",
00:38:47.286 "dma_device_type": 1
00:38:47.286 },
00:38:47.286 {
00:38:47.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:38:47.286 "dma_device_type": 2
00:38:47.286 }
00:38:47.286 ],
00:38:47.286 "driver_specific": {}
00:38:47.286 }
00:38:47.286 ]
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:38:47.286 [2024-12-09 05:29:34.154612] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
[2024-12-09 05:29:34.154696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
[2024-12-09 05:29:34.154743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
[2024-12-09 05:29:34.157070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
[2024-12-09 05:29:34.157137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:38:47.286 "name": "Existed_Raid",
00:38:47.286 "uuid": "00000000-0000-0000-0000-000000000000",
00:38:47.286 "strip_size_kb": 64,
00:38:47.286 "state": "configuring",
00:38:47.286 "raid_level": "concat",
00:38:47.286 "superblock": false,
00:38:47.286 "num_base_bdevs": 4,
00:38:47.286 "num_base_bdevs_discovered": 3,
00:38:47.286 "num_base_bdevs_operational": 4,
00:38:47.286 "base_bdevs_list": [
00:38:47.286 {
00:38:47.286 "name": "BaseBdev1",
00:38:47.286 "uuid": "00000000-0000-0000-0000-000000000000",
00:38:47.286 "is_configured": false,
00:38:47.286 "data_offset": 0,
00:38:47.286 "data_size": 0
00:38:47.286 },
00:38:47.286 {
00:38:47.286 "name": "BaseBdev2",
00:38:47.286 "uuid": "a21a57d9-7961-43a0-9ab7-918cdcd9b5be",
00:38:47.286 "is_configured": true,
00:38:47.286 "data_offset": 0,
00:38:47.286 "data_size": 65536
00:38:47.286 },
00:38:47.286 {
00:38:47.286 "name": "BaseBdev3",
00:38:47.286 "uuid": "727f2d85-6723-4199-98e4-b76d72e27a2d",
00:38:47.286 "is_configured": true,
00:38:47.286 "data_offset": 0,
00:38:47.286 "data_size": 65536
00:38:47.286 },
00:38:47.286 {
00:38:47.286 "name": "BaseBdev4",
00:38:47.286 "uuid": "f3844853-6e7f-4f20-9232-d7eff9ff65ff",
00:38:47.286 "is_configured": true,
00:38:47.286 "data_offset": 0,
00:38:47.286 "data_size": 65536
00:38:47.286 }
00:38:47.286 ]
00:38:47.286 }'
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:38:47.286 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:38:47.852 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:38:47.852 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:47.852 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:38:47.852 [2024-12-09 05:29:34.694827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:38:47.852 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:47.852 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:38:47.852 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:38:47.852 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:38:47.852 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:38:47.852 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:38:47.852 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:38:47.852 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:38:47.852 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:38:47.852 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:38:47.852 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:38:47.852 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:38:47.852 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:38:47.852 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:47.852 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:38:47.852 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:47.852 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:38:47.852 "name": "Existed_Raid",
00:38:47.852 "uuid": "00000000-0000-0000-0000-000000000000",
00:38:47.852 "strip_size_kb": 64,
00:38:47.852 "state": "configuring",
00:38:47.852 "raid_level": "concat",
00:38:47.852 "superblock": false,
00:38:47.852 "num_base_bdevs": 4,
00:38:47.852 "num_base_bdevs_discovered": 2,
00:38:47.852 "num_base_bdevs_operational": 4,
00:38:47.852 "base_bdevs_list": [
00:38:47.852 {
00:38:47.852 "name": "BaseBdev1",
00:38:47.852 "uuid": "00000000-0000-0000-0000-000000000000",
00:38:47.852 "is_configured": false,
00:38:47.852 "data_offset": 0,
00:38:47.852 "data_size": 0
00:38:47.852 },
00:38:47.852 {
00:38:47.852 "name": null,
00:38:47.852 "uuid": "a21a57d9-7961-43a0-9ab7-918cdcd9b5be",
00:38:47.852 "is_configured": false,
00:38:47.852 "data_offset": 0,
00:38:47.852 "data_size": 65536
00:38:47.852 },
00:38:47.852 {
00:38:47.852 "name": "BaseBdev3",
00:38:47.852 "uuid": "727f2d85-6723-4199-98e4-b76d72e27a2d",
00:38:47.852 "is_configured": true,
00:38:47.852 "data_offset": 0,
00:38:47.852 "data_size": 65536
00:38:47.852 },
00:38:47.852 {
00:38:47.852 "name": "BaseBdev4",
00:38:47.852 "uuid": "f3844853-6e7f-4f20-9232-d7eff9ff65ff",
00:38:47.852 "is_configured": true,
00:38:47.852 "data_offset": 0,
00:38:47.852 "data_size": 65536
00:38:47.852 }
00:38:47.852 ]
00:38:47.852 }'
00:38:47.852 05:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:38:47.852 05:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:38:48.416 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:38:48.416 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:38:48.416 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:48.416 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:38:48.416 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:48.416 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:38:48.416 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:38:48.416 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:48.416 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:38:48.416 [2024-12-09 05:29:35.312337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:38:48.416 BaseBdev1
00:38:48.416 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:48.416 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:38:48.416 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:38:48.416 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:38:48.416 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:38:48.416 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:38:48.416 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:38:48.416 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:38:48.416 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:48.416 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:38:48.416 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:48.416 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:38:48.416 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:48.416 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:38:48.416 [
00:38:48.416 {
00:38:48.416 "name": "BaseBdev1",
00:38:48.416 "aliases": [
00:38:48.416 "01a1ca1e-2053-4d0f-8eda-9091b0c2a384"
00:38:48.416 ],
00:38:48.416 "product_name": "Malloc disk",
00:38:48.416 "block_size": 512,
00:38:48.416 "num_blocks": 65536,
00:38:48.416 "uuid": "01a1ca1e-2053-4d0f-8eda-9091b0c2a384",
00:38:48.416 "assigned_rate_limits": {
00:38:48.416 "rw_ios_per_sec": 0,
00:38:48.416 "rw_mbytes_per_sec": 0,
00:38:48.417 "r_mbytes_per_sec": 0,
00:38:48.417 "w_mbytes_per_sec": 0
00:38:48.417 },
00:38:48.417 "claimed": true,
00:38:48.417 "claim_type": "exclusive_write",
00:38:48.417 "zoned": false,
00:38:48.417 "supported_io_types": {
00:38:48.417 "read": true,
00:38:48.417 "write": true,
00:38:48.417 "unmap": true,
00:38:48.417 "flush": true,
00:38:48.417 "reset": true,
00:38:48.417 "nvme_admin": false,
00:38:48.417 "nvme_io": false,
00:38:48.417 "nvme_io_md": false,
00:38:48.417 "write_zeroes": true,
00:38:48.417 "zcopy": true,
00:38:48.417 "get_zone_info": false,
00:38:48.417 "zone_management": false,
00:38:48.417 "zone_append": false,
00:38:48.417 "compare": false,
00:38:48.417 "compare_and_write": false,
00:38:48.417 "abort": true,
00:38:48.417 "seek_hole": false,
00:38:48.417 "seek_data": false,
00:38:48.417 "copy": true,
00:38:48.417 "nvme_iov_md": false
00:38:48.417 },
00:38:48.417 "memory_domains": [
00:38:48.417 {
00:38:48.417 "dma_device_id": "system",
00:38:48.417 "dma_device_type": 1
00:38:48.417 },
00:38:48.417 {
00:38:48.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:38:48.417 "dma_device_type": 2
00:38:48.417 }
00:38:48.417 ],
00:38:48.417 "driver_specific": {}
00:38:48.417 }
00:38:48.417 ]
00:38:48.417 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:48.417 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:38:48.417 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:38:48.417 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:38:48.417 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:38:48.417 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:38:48.417 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:38:48.417 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:38:48.417 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:38:48.417 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:38:48.417 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:38:48.417 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:38:48.417 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:38:48.417 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:38:48.417 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:48.417 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:38:48.417 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:48.674 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:38:48.674 "name": "Existed_Raid",
00:38:48.674 "uuid": "00000000-0000-0000-0000-000000000000",
00:38:48.674 "strip_size_kb": 64,
00:38:48.674 "state": "configuring",
00:38:48.674 "raid_level": "concat",
00:38:48.674 "superblock": false,
00:38:48.674 "num_base_bdevs": 4,
00:38:48.674 "num_base_bdevs_discovered": 3,
00:38:48.674 "num_base_bdevs_operational": 4,
00:38:48.674 "base_bdevs_list": [
00:38:48.674 {
00:38:48.674 "name": "BaseBdev1",
00:38:48.674 "uuid": "01a1ca1e-2053-4d0f-8eda-9091b0c2a384",
00:38:48.674 "is_configured": true,
00:38:48.674 "data_offset": 0,
00:38:48.674 "data_size": 65536
00:38:48.674 },
00:38:48.674 {
00:38:48.674 "name": null,
00:38:48.674 "uuid": "a21a57d9-7961-43a0-9ab7-918cdcd9b5be",
00:38:48.674 "is_configured": false,
00:38:48.674 "data_offset": 0,
00:38:48.674 "data_size": 65536
00:38:48.674 },
00:38:48.674 {
00:38:48.674 "name": "BaseBdev3",
00:38:48.674 "uuid": "727f2d85-6723-4199-98e4-b76d72e27a2d",
00:38:48.674 "is_configured": true,
00:38:48.674 "data_offset": 0,
00:38:48.674 "data_size": 65536
00:38:48.674 },
00:38:48.674 {
00:38:48.674 "name": "BaseBdev4",
00:38:48.674 "uuid": "f3844853-6e7f-4f20-9232-d7eff9ff65ff",
00:38:48.674 "is_configured": true,
00:38:48.674 "data_offset": 0,
00:38:48.674 "data_size": 65536
00:38:48.674 }
00:38:48.674 ]
00:38:48.674 }'
00:38:48.674 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:38:48.674 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:38:48.931 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:38:48.931 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:48.931 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:38:48.931 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:38:49.189 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:49.189 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:38:49.189 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:38:49.189 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:49.189 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:38:49.189 [2024-12-09 05:29:35.944627] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:38:49.189 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:49.189 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:38:49.189 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:38:49.189 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:38:49.189 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:38:49.189 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:38:49.189 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:38:49.189 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:38:49.189 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:38:49.189 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:38:49.189 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:38:49.189 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:38:49.189 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:49.189 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:38:49.189 05:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:38:49.189 05:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:49.189 05:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:38:49.189 "name": "Existed_Raid",
00:38:49.189 "uuid": "00000000-0000-0000-0000-000000000000",
00:38:49.189 "strip_size_kb": 64,
00:38:49.189 "state": "configuring",
00:38:49.189 "raid_level": "concat",
00:38:49.189 "superblock": false,
00:38:49.189 "num_base_bdevs": 4,
00:38:49.189 "num_base_bdevs_discovered": 2,
00:38:49.189 "num_base_bdevs_operational": 4,
00:38:49.189 "base_bdevs_list": [
00:38:49.189 {
00:38:49.189 "name": "BaseBdev1",
00:38:49.189 "uuid": "01a1ca1e-2053-4d0f-8eda-9091b0c2a384",
00:38:49.189 "is_configured": true,
00:38:49.189 "data_offset": 0,
00:38:49.189 "data_size": 65536
00:38:49.189 },
00:38:49.189 {
00:38:49.189 "name": null,
00:38:49.189 "uuid": "a21a57d9-7961-43a0-9ab7-918cdcd9b5be",
00:38:49.189 "is_configured": false,
00:38:49.189 "data_offset": 0,
00:38:49.189 "data_size": 65536
00:38:49.189 },
00:38:49.189 {
00:38:49.189 "name": null,
00:38:49.189 "uuid": "727f2d85-6723-4199-98e4-b76d72e27a2d",
00:38:49.189 "is_configured": false,
00:38:49.189 "data_offset": 0,
00:38:49.189 "data_size": 65536
00:38:49.189 },
00:38:49.189 {
00:38:49.189 "name": "BaseBdev4",
00:38:49.189 "uuid": "f3844853-6e7f-4f20-9232-d7eff9ff65ff",
00:38:49.189 "is_configured": true,
00:38:49.189 "data_offset": 0,
00:38:49.189 "data_size": 65536
00:38:49.189 }
00:38:49.189 ]
00:38:49.189 }'
00:38:49.189 05:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:38:49.189 05:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:38:49.753 05:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:38:49.753 05:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:38:49.753 05:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:49.753 05:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:38:49.753 05:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:49.753 05:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:38:49.753 05:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:38:49.753 05:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:49.753 05:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:38:49.753 [2024-12-09 05:29:36.536872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:38:49.753 05:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:49.753 05:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:38:49.753 05:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:38:49.753 05:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:38:49.753 05:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:38:49.753 05:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:38:49.753 05:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:38:49.753 05:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:38:49.753 05:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:38:49.753 05:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:38:49.753 05:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:38:49.753 05:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:38:49.753 05:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:38:49.753 05:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:49.753 05:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:38:49.753 05:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:49.753 05:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:38:49.753 "name": "Existed_Raid",
00:38:49.753 "uuid": "00000000-0000-0000-0000-000000000000",
00:38:49.753 "strip_size_kb": 64,
00:38:49.753 "state": "configuring",
00:38:49.753 "raid_level": "concat",
00:38:49.753 "superblock": false,
00:38:49.753 "num_base_bdevs": 4,
00:38:49.753 "num_base_bdevs_discovered": 3,
00:38:49.753 "num_base_bdevs_operational": 4,
00:38:49.753 "base_bdevs_list": [
00:38:49.753 {
00:38:49.753 "name": "BaseBdev1",
00:38:49.753 "uuid": "01a1ca1e-2053-4d0f-8eda-9091b0c2a384",
00:38:49.753 "is_configured": true,
00:38:49.753 "data_offset": 0,
00:38:49.753 "data_size": 65536
00:38:49.753 },
00:38:49.753 {
00:38:49.753 "name": null,
00:38:49.753 "uuid": "a21a57d9-7961-43a0-9ab7-918cdcd9b5be",
00:38:49.753 "is_configured": false,
00:38:49.753 "data_offset": 0,
00:38:49.753 "data_size": 65536
00:38:49.753 },
00:38:49.753 {
00:38:49.753 "name": "BaseBdev3",
00:38:49.753 "uuid": "727f2d85-6723-4199-98e4-b76d72e27a2d",
00:38:49.753 "is_configured": true,
00:38:49.753 "data_offset": 0,
00:38:49.753 "data_size": 65536
00:38:49.753 },
00:38:49.753 {
00:38:49.753 "name": "BaseBdev4",
00:38:49.753 "uuid": "f3844853-6e7f-4f20-9232-d7eff9ff65ff",
00:38:49.753 "is_configured": true,
00:38:49.753 "data_offset": 0,
00:38:49.753 "data_size": 65536
00:38:49.753 }
00:38:49.753 ]
00:38:49.753 }'
00:38:49.753 05:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:38:49.753 05:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:38:50.319 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:38:50.319 05:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:50.319 05:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:38:50.319 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:38:50.319 05:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:50.319 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:38:50.319 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:38:50.319 05:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:50.319 05:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:38:50.319 [2024-12-09 05:29:37.117160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:38:50.319 05:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:50.319 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:38:50.319 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103
-- # local raid_bdev_name=Existed_Raid 00:38:50.319 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:50.319 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:38:50.319 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:50.319 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:50.319 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:50.319 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:50.319 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:50.319 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:50.319 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:50.319 05:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.319 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:50.319 05:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:50.319 05:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.319 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:50.319 "name": "Existed_Raid", 00:38:50.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:50.319 "strip_size_kb": 64, 00:38:50.319 "state": "configuring", 00:38:50.319 "raid_level": "concat", 00:38:50.319 "superblock": false, 00:38:50.319 "num_base_bdevs": 4, 00:38:50.319 "num_base_bdevs_discovered": 2, 00:38:50.319 "num_base_bdevs_operational": 4, 00:38:50.319 
"base_bdevs_list": [ 00:38:50.319 { 00:38:50.319 "name": null, 00:38:50.319 "uuid": "01a1ca1e-2053-4d0f-8eda-9091b0c2a384", 00:38:50.319 "is_configured": false, 00:38:50.319 "data_offset": 0, 00:38:50.319 "data_size": 65536 00:38:50.319 }, 00:38:50.319 { 00:38:50.319 "name": null, 00:38:50.319 "uuid": "a21a57d9-7961-43a0-9ab7-918cdcd9b5be", 00:38:50.319 "is_configured": false, 00:38:50.319 "data_offset": 0, 00:38:50.319 "data_size": 65536 00:38:50.319 }, 00:38:50.319 { 00:38:50.319 "name": "BaseBdev3", 00:38:50.319 "uuid": "727f2d85-6723-4199-98e4-b76d72e27a2d", 00:38:50.319 "is_configured": true, 00:38:50.319 "data_offset": 0, 00:38:50.319 "data_size": 65536 00:38:50.319 }, 00:38:50.319 { 00:38:50.319 "name": "BaseBdev4", 00:38:50.319 "uuid": "f3844853-6e7f-4f20-9232-d7eff9ff65ff", 00:38:50.319 "is_configured": true, 00:38:50.319 "data_offset": 0, 00:38:50.319 "data_size": 65536 00:38:50.319 } 00:38:50.319 ] 00:38:50.319 }' 00:38:50.319 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:50.319 05:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:50.884 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:50.884 05:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.884 05:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:50.884 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:38:50.884 05:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.884 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:38:50.884 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:38:50.884 05:29:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.884 05:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:50.884 [2024-12-09 05:29:37.786293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:50.884 05:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.884 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:38:50.884 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:50.884 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:50.884 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:38:50.884 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:50.884 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:50.884 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:50.884 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:50.884 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:50.884 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:50.884 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:50.884 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:50.884 05:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.884 05:29:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:50.884 05:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.884 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:50.884 "name": "Existed_Raid", 00:38:50.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:50.884 "strip_size_kb": 64, 00:38:50.884 "state": "configuring", 00:38:50.884 "raid_level": "concat", 00:38:50.884 "superblock": false, 00:38:50.884 "num_base_bdevs": 4, 00:38:50.884 "num_base_bdevs_discovered": 3, 00:38:50.884 "num_base_bdevs_operational": 4, 00:38:50.884 "base_bdevs_list": [ 00:38:50.884 { 00:38:50.884 "name": null, 00:38:50.884 "uuid": "01a1ca1e-2053-4d0f-8eda-9091b0c2a384", 00:38:50.884 "is_configured": false, 00:38:50.884 "data_offset": 0, 00:38:50.884 "data_size": 65536 00:38:50.884 }, 00:38:50.884 { 00:38:50.884 "name": "BaseBdev2", 00:38:50.884 "uuid": "a21a57d9-7961-43a0-9ab7-918cdcd9b5be", 00:38:50.884 "is_configured": true, 00:38:50.884 "data_offset": 0, 00:38:50.884 "data_size": 65536 00:38:50.884 }, 00:38:50.884 { 00:38:50.884 "name": "BaseBdev3", 00:38:50.884 "uuid": "727f2d85-6723-4199-98e4-b76d72e27a2d", 00:38:50.884 "is_configured": true, 00:38:50.884 "data_offset": 0, 00:38:50.884 "data_size": 65536 00:38:50.884 }, 00:38:50.884 { 00:38:50.884 "name": "BaseBdev4", 00:38:50.884 "uuid": "f3844853-6e7f-4f20-9232-d7eff9ff65ff", 00:38:50.884 "is_configured": true, 00:38:50.884 "data_offset": 0, 00:38:50.884 "data_size": 65536 00:38:50.884 } 00:38:50.884 ] 00:38:50.884 }' 00:38:50.884 05:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:50.884 05:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:51.487 05:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:38:51.487 05:29:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:51.487 05:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:51.487 05:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:51.487 05:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:51.487 05:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:38:51.487 05:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:51.487 05:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:51.487 05:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:38:51.487 05:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:51.487 05:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:51.487 05:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 01a1ca1e-2053-4d0f-8eda-9091b0c2a384 00:38:51.487 05:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:51.487 05:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:51.764 [2024-12-09 05:29:38.470283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:38:51.764 [2024-12-09 05:29:38.470555] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:38:51.764 [2024-12-09 05:29:38.470578] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:38:51.764 [2024-12-09 05:29:38.471002] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:38:51.764 
[2024-12-09 05:29:38.471220] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:38:51.764 [2024-12-09 05:29:38.471240] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:38:51.764 [2024-12-09 05:29:38.471553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:51.764 NewBaseBdev 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:38:51.764 [ 00:38:51.764 { 00:38:51.764 "name": "NewBaseBdev", 00:38:51.764 "aliases": [ 00:38:51.764 "01a1ca1e-2053-4d0f-8eda-9091b0c2a384" 00:38:51.764 ], 00:38:51.764 "product_name": "Malloc disk", 00:38:51.764 "block_size": 512, 00:38:51.764 "num_blocks": 65536, 00:38:51.764 "uuid": "01a1ca1e-2053-4d0f-8eda-9091b0c2a384", 00:38:51.764 "assigned_rate_limits": { 00:38:51.764 "rw_ios_per_sec": 0, 00:38:51.764 "rw_mbytes_per_sec": 0, 00:38:51.764 "r_mbytes_per_sec": 0, 00:38:51.764 "w_mbytes_per_sec": 0 00:38:51.764 }, 00:38:51.764 "claimed": true, 00:38:51.764 "claim_type": "exclusive_write", 00:38:51.764 "zoned": false, 00:38:51.764 "supported_io_types": { 00:38:51.764 "read": true, 00:38:51.764 "write": true, 00:38:51.764 "unmap": true, 00:38:51.764 "flush": true, 00:38:51.764 "reset": true, 00:38:51.764 "nvme_admin": false, 00:38:51.764 "nvme_io": false, 00:38:51.764 "nvme_io_md": false, 00:38:51.764 "write_zeroes": true, 00:38:51.764 "zcopy": true, 00:38:51.764 "get_zone_info": false, 00:38:51.764 "zone_management": false, 00:38:51.764 "zone_append": false, 00:38:51.764 "compare": false, 00:38:51.764 "compare_and_write": false, 00:38:51.764 "abort": true, 00:38:51.764 "seek_hole": false, 00:38:51.764 "seek_data": false, 00:38:51.764 "copy": true, 00:38:51.764 "nvme_iov_md": false 00:38:51.764 }, 00:38:51.764 "memory_domains": [ 00:38:51.764 { 00:38:51.764 "dma_device_id": "system", 00:38:51.764 "dma_device_type": 1 00:38:51.764 }, 00:38:51.764 { 00:38:51.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:51.764 "dma_device_type": 2 00:38:51.764 } 00:38:51.764 ], 00:38:51.764 "driver_specific": {} 00:38:51.764 } 00:38:51.764 ] 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online concat 64 4 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:51.764 "name": "Existed_Raid", 00:38:51.764 "uuid": "8d4d806e-3236-4401-8749-b430c0f7b72d", 00:38:51.764 "strip_size_kb": 64, 00:38:51.764 "state": "online", 00:38:51.764 "raid_level": "concat", 00:38:51.764 "superblock": false, 00:38:51.764 "num_base_bdevs": 4, 00:38:51.764 
"num_base_bdevs_discovered": 4, 00:38:51.764 "num_base_bdevs_operational": 4, 00:38:51.764 "base_bdevs_list": [ 00:38:51.764 { 00:38:51.764 "name": "NewBaseBdev", 00:38:51.764 "uuid": "01a1ca1e-2053-4d0f-8eda-9091b0c2a384", 00:38:51.764 "is_configured": true, 00:38:51.764 "data_offset": 0, 00:38:51.764 "data_size": 65536 00:38:51.764 }, 00:38:51.764 { 00:38:51.764 "name": "BaseBdev2", 00:38:51.764 "uuid": "a21a57d9-7961-43a0-9ab7-918cdcd9b5be", 00:38:51.764 "is_configured": true, 00:38:51.764 "data_offset": 0, 00:38:51.764 "data_size": 65536 00:38:51.764 }, 00:38:51.764 { 00:38:51.764 "name": "BaseBdev3", 00:38:51.764 "uuid": "727f2d85-6723-4199-98e4-b76d72e27a2d", 00:38:51.764 "is_configured": true, 00:38:51.764 "data_offset": 0, 00:38:51.764 "data_size": 65536 00:38:51.764 }, 00:38:51.764 { 00:38:51.764 "name": "BaseBdev4", 00:38:51.764 "uuid": "f3844853-6e7f-4f20-9232-d7eff9ff65ff", 00:38:51.764 "is_configured": true, 00:38:51.764 "data_offset": 0, 00:38:51.764 "data_size": 65536 00:38:51.764 } 00:38:51.764 ] 00:38:51.764 }' 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:51.764 05:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:52.331 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:38:52.331 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:38:52.331 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:38:52.331 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:38:52.331 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:38:52.331 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:38:52.331 05:29:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:38:52.331 05:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.331 05:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:52.331 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:38:52.331 [2024-12-09 05:29:39.022964] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:52.331 05:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.331 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:52.331 "name": "Existed_Raid", 00:38:52.331 "aliases": [ 00:38:52.331 "8d4d806e-3236-4401-8749-b430c0f7b72d" 00:38:52.331 ], 00:38:52.331 "product_name": "Raid Volume", 00:38:52.331 "block_size": 512, 00:38:52.331 "num_blocks": 262144, 00:38:52.331 "uuid": "8d4d806e-3236-4401-8749-b430c0f7b72d", 00:38:52.331 "assigned_rate_limits": { 00:38:52.331 "rw_ios_per_sec": 0, 00:38:52.331 "rw_mbytes_per_sec": 0, 00:38:52.331 "r_mbytes_per_sec": 0, 00:38:52.331 "w_mbytes_per_sec": 0 00:38:52.331 }, 00:38:52.331 "claimed": false, 00:38:52.331 "zoned": false, 00:38:52.331 "supported_io_types": { 00:38:52.331 "read": true, 00:38:52.331 "write": true, 00:38:52.331 "unmap": true, 00:38:52.331 "flush": true, 00:38:52.331 "reset": true, 00:38:52.331 "nvme_admin": false, 00:38:52.331 "nvme_io": false, 00:38:52.331 "nvme_io_md": false, 00:38:52.331 "write_zeroes": true, 00:38:52.331 "zcopy": false, 00:38:52.331 "get_zone_info": false, 00:38:52.331 "zone_management": false, 00:38:52.331 "zone_append": false, 00:38:52.331 "compare": false, 00:38:52.331 "compare_and_write": false, 00:38:52.331 "abort": false, 00:38:52.331 "seek_hole": false, 00:38:52.331 "seek_data": false, 00:38:52.331 "copy": false, 00:38:52.331 "nvme_iov_md": false 00:38:52.331 }, 00:38:52.331 "memory_domains": [ 
00:38:52.331 { 00:38:52.331 "dma_device_id": "system", 00:38:52.331 "dma_device_type": 1 00:38:52.331 }, 00:38:52.331 { 00:38:52.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:52.331 "dma_device_type": 2 00:38:52.331 }, 00:38:52.331 { 00:38:52.331 "dma_device_id": "system", 00:38:52.331 "dma_device_type": 1 00:38:52.331 }, 00:38:52.331 { 00:38:52.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:52.331 "dma_device_type": 2 00:38:52.331 }, 00:38:52.331 { 00:38:52.331 "dma_device_id": "system", 00:38:52.331 "dma_device_type": 1 00:38:52.331 }, 00:38:52.331 { 00:38:52.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:52.331 "dma_device_type": 2 00:38:52.331 }, 00:38:52.331 { 00:38:52.331 "dma_device_id": "system", 00:38:52.331 "dma_device_type": 1 00:38:52.331 }, 00:38:52.331 { 00:38:52.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:52.331 "dma_device_type": 2 00:38:52.331 } 00:38:52.331 ], 00:38:52.331 "driver_specific": { 00:38:52.331 "raid": { 00:38:52.331 "uuid": "8d4d806e-3236-4401-8749-b430c0f7b72d", 00:38:52.331 "strip_size_kb": 64, 00:38:52.331 "state": "online", 00:38:52.331 "raid_level": "concat", 00:38:52.331 "superblock": false, 00:38:52.331 "num_base_bdevs": 4, 00:38:52.331 "num_base_bdevs_discovered": 4, 00:38:52.331 "num_base_bdevs_operational": 4, 00:38:52.331 "base_bdevs_list": [ 00:38:52.331 { 00:38:52.331 "name": "NewBaseBdev", 00:38:52.331 "uuid": "01a1ca1e-2053-4d0f-8eda-9091b0c2a384", 00:38:52.331 "is_configured": true, 00:38:52.331 "data_offset": 0, 00:38:52.332 "data_size": 65536 00:38:52.332 }, 00:38:52.332 { 00:38:52.332 "name": "BaseBdev2", 00:38:52.332 "uuid": "a21a57d9-7961-43a0-9ab7-918cdcd9b5be", 00:38:52.332 "is_configured": true, 00:38:52.332 "data_offset": 0, 00:38:52.332 "data_size": 65536 00:38:52.332 }, 00:38:52.332 { 00:38:52.332 "name": "BaseBdev3", 00:38:52.332 "uuid": "727f2d85-6723-4199-98e4-b76d72e27a2d", 00:38:52.332 "is_configured": true, 00:38:52.332 "data_offset": 0, 00:38:52.332 "data_size": 65536 
00:38:52.332 }, 00:38:52.332 { 00:38:52.332 "name": "BaseBdev4", 00:38:52.332 "uuid": "f3844853-6e7f-4f20-9232-d7eff9ff65ff", 00:38:52.332 "is_configured": true, 00:38:52.332 "data_offset": 0, 00:38:52.332 "data_size": 65536 00:38:52.332 } 00:38:52.332 ] 00:38:52.332 } 00:38:52.332 } 00:38:52.332 }' 00:38:52.332 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:38:52.332 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:38:52.332 BaseBdev2 00:38:52.332 BaseBdev3 00:38:52.332 BaseBdev4' 00:38:52.332 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:52.332 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:38:52.332 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:52.332 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:38:52.332 05:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.332 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:52.332 05:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:52.332 05:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.332 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:52.332 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:52.332 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:52.332 
05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:38:52.332 05:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.332 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:52.332 05:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:52.332 05:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.332 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:52.332 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:52.332 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:52.332 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:38:52.332 05:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.332 05:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:52.332 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:52.332 05:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.590 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:52.590 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:52.590 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:52.590 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:38:52.590 05:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.590 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:52.590 05:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:52.590 05:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.590 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:52.590 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:52.590 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:38:52.590 05:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.590 05:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:52.590 [2024-12-09 05:29:39.394684] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:38:52.590 [2024-12-09 05:29:39.394918] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:52.590 [2024-12-09 05:29:39.395113] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:52.590 [2024-12-09 05:29:39.395442] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:52.590 [2024-12-09 05:29:39.395468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:38:52.590 05:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.590 05:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71458 00:38:52.590 05:29:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 71458 ']' 00:38:52.590 05:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71458 00:38:52.590 05:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:38:52.590 05:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:52.590 05:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71458 00:38:52.590 killing process with pid 71458 00:38:52.590 05:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:52.590 05:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:52.591 05:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71458' 00:38:52.591 05:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71458 00:38:52.591 [2024-12-09 05:29:39.433685] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:52.591 05:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71458 00:38:52.849 [2024-12-09 05:29:39.787191] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:54.224 05:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:38:54.224 00:38:54.224 real 0m12.892s 00:38:54.224 user 0m21.310s 00:38:54.224 sys 0m1.885s 00:38:54.224 05:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:54.224 05:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:38:54.224 ************************************ 00:38:54.224 END TEST raid_state_function_test 00:38:54.224 ************************************ 00:38:54.224 05:29:40 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test concat 4 true 00:38:54.224 05:29:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:38:54.224 05:29:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:54.224 05:29:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:54.224 ************************************ 00:38:54.224 START TEST raid_state_function_test_sb 00:38:54.224 ************************************ 00:38:54.224 05:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:38:54.224 05:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:38:54.224 05:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:38:54.224 05:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:38:54.224 05:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 
00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:38:54.225 Process raid pid: 72147 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72147 00:38:54.225 05:29:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72147' 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72147 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72147 ']' 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:54.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:54.225 05:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:54.225 [2024-12-09 05:29:41.060777] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:38:54.225 [2024-12-09 05:29:41.061257] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:54.484 [2024-12-09 05:29:41.236332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:54.484 [2024-12-09 05:29:41.380127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:54.743 [2024-12-09 05:29:41.604275] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:54.743 [2024-12-09 05:29:41.604321] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:55.310 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:55.310 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:38:55.310 05:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:38:55.310 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.310 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:55.310 [2024-12-09 05:29:42.025718] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:38:55.310 [2024-12-09 05:29:42.025995] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:38:55.310 [2024-12-09 05:29:42.026124] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:55.310 [2024-12-09 05:29:42.026278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:55.310 [2024-12-09 05:29:42.026388] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:38:55.310 [2024-12-09 05:29:42.026523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:38:55.310 [2024-12-09 05:29:42.026644] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:38:55.310 [2024-12-09 05:29:42.026798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:38:55.310 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.310 05:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:38:55.310 05:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:55.310 05:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:55.310 05:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:38:55.310 05:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:55.310 05:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:55.310 05:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:55.310 05:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:55.310 05:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:55.310 05:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:55.310 05:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:55.310 05:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:55.310 
05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.310 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:55.310 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.310 05:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:55.310 "name": "Existed_Raid", 00:38:55.310 "uuid": "d7700735-753d-4c9c-b404-de8d6633ed2b", 00:38:55.310 "strip_size_kb": 64, 00:38:55.310 "state": "configuring", 00:38:55.310 "raid_level": "concat", 00:38:55.310 "superblock": true, 00:38:55.310 "num_base_bdevs": 4, 00:38:55.310 "num_base_bdevs_discovered": 0, 00:38:55.310 "num_base_bdevs_operational": 4, 00:38:55.310 "base_bdevs_list": [ 00:38:55.310 { 00:38:55.310 "name": "BaseBdev1", 00:38:55.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:55.310 "is_configured": false, 00:38:55.310 "data_offset": 0, 00:38:55.310 "data_size": 0 00:38:55.310 }, 00:38:55.310 { 00:38:55.310 "name": "BaseBdev2", 00:38:55.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:55.310 "is_configured": false, 00:38:55.310 "data_offset": 0, 00:38:55.310 "data_size": 0 00:38:55.310 }, 00:38:55.310 { 00:38:55.310 "name": "BaseBdev3", 00:38:55.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:55.310 "is_configured": false, 00:38:55.310 "data_offset": 0, 00:38:55.310 "data_size": 0 00:38:55.310 }, 00:38:55.310 { 00:38:55.310 "name": "BaseBdev4", 00:38:55.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:55.310 "is_configured": false, 00:38:55.310 "data_offset": 0, 00:38:55.310 "data_size": 0 00:38:55.310 } 00:38:55.310 ] 00:38:55.310 }' 00:38:55.310 05:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:55.310 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:55.892 05:29:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:38:55.892 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.892 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:55.892 [2024-12-09 05:29:42.557824] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:38:55.892 [2024-12-09 05:29:42.557897] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:38:55.892 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.892 05:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:38:55.892 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.892 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:55.892 [2024-12-09 05:29:42.565827] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:38:55.892 [2024-12-09 05:29:42.566079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:38:55.892 [2024-12-09 05:29:42.566247] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:55.892 [2024-12-09 05:29:42.566318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:55.892 [2024-12-09 05:29:42.566420] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:38:55.892 [2024-12-09 05:29:42.566452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:38:55.892 [2024-12-09 05:29:42.566464] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:38:55.892 [2024-12-09 05:29:42.566479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:38:55.892 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.892 05:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:38:55.892 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.892 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:55.892 [2024-12-09 05:29:42.613860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:55.892 BaseBdev1 00:38:55.892 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.892 05:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:38:55.892 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:38:55.892 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:55.892 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:38:55.892 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:55.892 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:55.892 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:38:55.892 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.892 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:55.892 05:29:42 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.892 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:38:55.892 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.892 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:55.892 [ 00:38:55.892 { 00:38:55.892 "name": "BaseBdev1", 00:38:55.892 "aliases": [ 00:38:55.892 "9fd42da5-7328-4673-ae40-0ead0a661a34" 00:38:55.892 ], 00:38:55.892 "product_name": "Malloc disk", 00:38:55.892 "block_size": 512, 00:38:55.892 "num_blocks": 65536, 00:38:55.892 "uuid": "9fd42da5-7328-4673-ae40-0ead0a661a34", 00:38:55.892 "assigned_rate_limits": { 00:38:55.892 "rw_ios_per_sec": 0, 00:38:55.892 "rw_mbytes_per_sec": 0, 00:38:55.892 "r_mbytes_per_sec": 0, 00:38:55.892 "w_mbytes_per_sec": 0 00:38:55.892 }, 00:38:55.892 "claimed": true, 00:38:55.892 "claim_type": "exclusive_write", 00:38:55.892 "zoned": false, 00:38:55.892 "supported_io_types": { 00:38:55.892 "read": true, 00:38:55.892 "write": true, 00:38:55.892 "unmap": true, 00:38:55.892 "flush": true, 00:38:55.892 "reset": true, 00:38:55.892 "nvme_admin": false, 00:38:55.892 "nvme_io": false, 00:38:55.892 "nvme_io_md": false, 00:38:55.892 "write_zeroes": true, 00:38:55.892 "zcopy": true, 00:38:55.892 "get_zone_info": false, 00:38:55.892 "zone_management": false, 00:38:55.892 "zone_append": false, 00:38:55.892 "compare": false, 00:38:55.892 "compare_and_write": false, 00:38:55.892 "abort": true, 00:38:55.892 "seek_hole": false, 00:38:55.892 "seek_data": false, 00:38:55.892 "copy": true, 00:38:55.892 "nvme_iov_md": false 00:38:55.892 }, 00:38:55.892 "memory_domains": [ 00:38:55.893 { 00:38:55.893 "dma_device_id": "system", 00:38:55.893 "dma_device_type": 1 00:38:55.893 }, 00:38:55.893 { 00:38:55.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:55.893 "dma_device_type": 2 00:38:55.893 } 
00:38:55.893 ], 00:38:55.893 "driver_specific": {} 00:38:55.893 } 00:38:55.893 ] 00:38:55.893 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.893 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:38:55.893 05:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:38:55.893 05:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:55.893 05:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:55.893 05:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:38:55.893 05:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:55.893 05:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:55.893 05:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:55.893 05:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:55.893 05:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:55.893 05:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:55.893 05:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:55.893 05:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:55.893 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.893 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:55.893 05:29:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.893 05:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:55.893 "name": "Existed_Raid", 00:38:55.893 "uuid": "cdcecb8f-cab8-4a27-a2a0-13a5d9d2f044", 00:38:55.893 "strip_size_kb": 64, 00:38:55.893 "state": "configuring", 00:38:55.893 "raid_level": "concat", 00:38:55.893 "superblock": true, 00:38:55.893 "num_base_bdevs": 4, 00:38:55.893 "num_base_bdevs_discovered": 1, 00:38:55.893 "num_base_bdevs_operational": 4, 00:38:55.893 "base_bdevs_list": [ 00:38:55.893 { 00:38:55.893 "name": "BaseBdev1", 00:38:55.893 "uuid": "9fd42da5-7328-4673-ae40-0ead0a661a34", 00:38:55.893 "is_configured": true, 00:38:55.893 "data_offset": 2048, 00:38:55.893 "data_size": 63488 00:38:55.893 }, 00:38:55.893 { 00:38:55.893 "name": "BaseBdev2", 00:38:55.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:55.893 "is_configured": false, 00:38:55.893 "data_offset": 0, 00:38:55.893 "data_size": 0 00:38:55.893 }, 00:38:55.893 { 00:38:55.893 "name": "BaseBdev3", 00:38:55.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:55.893 "is_configured": false, 00:38:55.893 "data_offset": 0, 00:38:55.893 "data_size": 0 00:38:55.893 }, 00:38:55.893 { 00:38:55.893 "name": "BaseBdev4", 00:38:55.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:55.893 "is_configured": false, 00:38:55.893 "data_offset": 0, 00:38:55.893 "data_size": 0 00:38:55.893 } 00:38:55.893 ] 00:38:55.893 }' 00:38:55.893 05:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:55.893 05:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:56.459 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:38:56.459 05:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:56.459 05:29:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:56.459 [2024-12-09 05:29:43.130113] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:38:56.459 [2024-12-09 05:29:43.130242] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:38:56.459 05:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:56.459 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:38:56.459 05:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:56.459 05:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:56.459 [2024-12-09 05:29:43.138177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:56.459 [2024-12-09 05:29:43.140959] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:56.459 [2024-12-09 05:29:43.141176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:56.459 [2024-12-09 05:29:43.141315] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:38:56.459 [2024-12-09 05:29:43.141351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:38:56.459 [2024-12-09 05:29:43.141364] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:38:56.459 [2024-12-09 05:29:43.141379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:38:56.459 05:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:56.459 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:38:56.459 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:38:56.459 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:38:56.459 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:56.459 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:56.459 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:38:56.459 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:56.459 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:56.459 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:56.459 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:56.459 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:56.459 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:56.459 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:56.459 05:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:56.459 05:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:56.459 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:56.459 05:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:56.459 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:38:56.459 "name": "Existed_Raid", 00:38:56.459 "uuid": "8d2baff3-0238-4334-853d-81f52df00d16", 00:38:56.459 "strip_size_kb": 64, 00:38:56.459 "state": "configuring", 00:38:56.459 "raid_level": "concat", 00:38:56.459 "superblock": true, 00:38:56.459 "num_base_bdevs": 4, 00:38:56.459 "num_base_bdevs_discovered": 1, 00:38:56.459 "num_base_bdevs_operational": 4, 00:38:56.459 "base_bdevs_list": [ 00:38:56.459 { 00:38:56.459 "name": "BaseBdev1", 00:38:56.459 "uuid": "9fd42da5-7328-4673-ae40-0ead0a661a34", 00:38:56.459 "is_configured": true, 00:38:56.459 "data_offset": 2048, 00:38:56.459 "data_size": 63488 00:38:56.459 }, 00:38:56.459 { 00:38:56.459 "name": "BaseBdev2", 00:38:56.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:56.459 "is_configured": false, 00:38:56.459 "data_offset": 0, 00:38:56.459 "data_size": 0 00:38:56.459 }, 00:38:56.459 { 00:38:56.459 "name": "BaseBdev3", 00:38:56.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:56.459 "is_configured": false, 00:38:56.459 "data_offset": 0, 00:38:56.459 "data_size": 0 00:38:56.459 }, 00:38:56.459 { 00:38:56.459 "name": "BaseBdev4", 00:38:56.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:56.459 "is_configured": false, 00:38:56.459 "data_offset": 0, 00:38:56.459 "data_size": 0 00:38:56.459 } 00:38:56.459 ] 00:38:56.459 }' 00:38:56.459 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:56.459 05:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:56.716 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:38:56.716 05:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:56.716 05:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:56.974 BaseBdev2 00:38:56.974 [2024-12-09 05:29:43.697329] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:56.974 [ 00:38:56.974 { 00:38:56.974 "name": "BaseBdev2", 00:38:56.974 "aliases": [ 00:38:56.974 "380849a8-08e4-4cdb-a6b4-333e3ddf77f9" 00:38:56.974 ], 00:38:56.974 "product_name": "Malloc disk", 00:38:56.974 "block_size": 512, 00:38:56.974 "num_blocks": 65536, 00:38:56.974 "uuid": 
"380849a8-08e4-4cdb-a6b4-333e3ddf77f9", 00:38:56.974 "assigned_rate_limits": { 00:38:56.974 "rw_ios_per_sec": 0, 00:38:56.974 "rw_mbytes_per_sec": 0, 00:38:56.974 "r_mbytes_per_sec": 0, 00:38:56.974 "w_mbytes_per_sec": 0 00:38:56.974 }, 00:38:56.974 "claimed": true, 00:38:56.974 "claim_type": "exclusive_write", 00:38:56.974 "zoned": false, 00:38:56.974 "supported_io_types": { 00:38:56.974 "read": true, 00:38:56.974 "write": true, 00:38:56.974 "unmap": true, 00:38:56.974 "flush": true, 00:38:56.974 "reset": true, 00:38:56.974 "nvme_admin": false, 00:38:56.974 "nvme_io": false, 00:38:56.974 "nvme_io_md": false, 00:38:56.974 "write_zeroes": true, 00:38:56.974 "zcopy": true, 00:38:56.974 "get_zone_info": false, 00:38:56.974 "zone_management": false, 00:38:56.974 "zone_append": false, 00:38:56.974 "compare": false, 00:38:56.974 "compare_and_write": false, 00:38:56.974 "abort": true, 00:38:56.974 "seek_hole": false, 00:38:56.974 "seek_data": false, 00:38:56.974 "copy": true, 00:38:56.974 "nvme_iov_md": false 00:38:56.974 }, 00:38:56.974 "memory_domains": [ 00:38:56.974 { 00:38:56.974 "dma_device_id": "system", 00:38:56.974 "dma_device_type": 1 00:38:56.974 }, 00:38:56.974 { 00:38:56.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:56.974 "dma_device_type": 2 00:38:56.974 } 00:38:56.974 ], 00:38:56.974 "driver_specific": {} 00:38:56.974 } 00:38:56.974 ] 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:38:56.974 05:29:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:56.974 "name": "Existed_Raid", 00:38:56.974 "uuid": "8d2baff3-0238-4334-853d-81f52df00d16", 00:38:56.974 "strip_size_kb": 64, 00:38:56.974 "state": "configuring", 00:38:56.974 "raid_level": "concat", 00:38:56.974 "superblock": true, 00:38:56.974 "num_base_bdevs": 4, 00:38:56.974 
"num_base_bdevs_discovered": 2, 00:38:56.974 "num_base_bdevs_operational": 4, 00:38:56.974 "base_bdevs_list": [ 00:38:56.974 { 00:38:56.974 "name": "BaseBdev1", 00:38:56.974 "uuid": "9fd42da5-7328-4673-ae40-0ead0a661a34", 00:38:56.974 "is_configured": true, 00:38:56.974 "data_offset": 2048, 00:38:56.974 "data_size": 63488 00:38:56.974 }, 00:38:56.974 { 00:38:56.974 "name": "BaseBdev2", 00:38:56.974 "uuid": "380849a8-08e4-4cdb-a6b4-333e3ddf77f9", 00:38:56.974 "is_configured": true, 00:38:56.974 "data_offset": 2048, 00:38:56.974 "data_size": 63488 00:38:56.974 }, 00:38:56.974 { 00:38:56.974 "name": "BaseBdev3", 00:38:56.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:56.974 "is_configured": false, 00:38:56.974 "data_offset": 0, 00:38:56.974 "data_size": 0 00:38:56.974 }, 00:38:56.974 { 00:38:56.974 "name": "BaseBdev4", 00:38:56.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:56.974 "is_configured": false, 00:38:56.974 "data_offset": 0, 00:38:56.974 "data_size": 0 00:38:56.974 } 00:38:56.974 ] 00:38:56.974 }' 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:56.974 05:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:57.539 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:38:57.539 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.539 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:57.539 BaseBdev3 00:38:57.539 [2024-12-09 05:29:44.308133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:38:57.539 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.539 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:38:57.539 05:29:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:38:57.539 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:57.539 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:38:57.539 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:57.539 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:57.539 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:38:57.539 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.539 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:57.539 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.539 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:38:57.539 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.539 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:57.539 [ 00:38:57.539 { 00:38:57.539 "name": "BaseBdev3", 00:38:57.539 "aliases": [ 00:38:57.539 "ce5a0ac8-7125-432e-b50d-1e1259c38783" 00:38:57.539 ], 00:38:57.539 "product_name": "Malloc disk", 00:38:57.539 "block_size": 512, 00:38:57.539 "num_blocks": 65536, 00:38:57.539 "uuid": "ce5a0ac8-7125-432e-b50d-1e1259c38783", 00:38:57.539 "assigned_rate_limits": { 00:38:57.539 "rw_ios_per_sec": 0, 00:38:57.539 "rw_mbytes_per_sec": 0, 00:38:57.539 "r_mbytes_per_sec": 0, 00:38:57.539 "w_mbytes_per_sec": 0 00:38:57.539 }, 00:38:57.539 "claimed": true, 00:38:57.539 "claim_type": "exclusive_write", 00:38:57.539 "zoned": false, 
00:38:57.539 "supported_io_types": { 00:38:57.539 "read": true, 00:38:57.539 "write": true, 00:38:57.539 "unmap": true, 00:38:57.539 "flush": true, 00:38:57.539 "reset": true, 00:38:57.539 "nvme_admin": false, 00:38:57.539 "nvme_io": false, 00:38:57.539 "nvme_io_md": false, 00:38:57.539 "write_zeroes": true, 00:38:57.539 "zcopy": true, 00:38:57.539 "get_zone_info": false, 00:38:57.539 "zone_management": false, 00:38:57.539 "zone_append": false, 00:38:57.539 "compare": false, 00:38:57.539 "compare_and_write": false, 00:38:57.539 "abort": true, 00:38:57.539 "seek_hole": false, 00:38:57.539 "seek_data": false, 00:38:57.539 "copy": true, 00:38:57.539 "nvme_iov_md": false 00:38:57.539 }, 00:38:57.539 "memory_domains": [ 00:38:57.539 { 00:38:57.539 "dma_device_id": "system", 00:38:57.539 "dma_device_type": 1 00:38:57.539 }, 00:38:57.539 { 00:38:57.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:57.539 "dma_device_type": 2 00:38:57.539 } 00:38:57.539 ], 00:38:57.539 "driver_specific": {} 00:38:57.539 } 00:38:57.539 ] 00:38:57.539 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.539 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:38:57.539 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:38:57.539 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:38:57.539 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:38:57.540 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:57.540 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:57.540 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:38:57.540 05:29:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:57.540 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:57.540 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:57.540 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:57.540 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:57.540 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:57.540 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:57.540 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:57.540 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.540 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:57.540 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.540 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:57.540 "name": "Existed_Raid", 00:38:57.540 "uuid": "8d2baff3-0238-4334-853d-81f52df00d16", 00:38:57.540 "strip_size_kb": 64, 00:38:57.540 "state": "configuring", 00:38:57.540 "raid_level": "concat", 00:38:57.540 "superblock": true, 00:38:57.540 "num_base_bdevs": 4, 00:38:57.540 "num_base_bdevs_discovered": 3, 00:38:57.540 "num_base_bdevs_operational": 4, 00:38:57.540 "base_bdevs_list": [ 00:38:57.540 { 00:38:57.540 "name": "BaseBdev1", 00:38:57.540 "uuid": "9fd42da5-7328-4673-ae40-0ead0a661a34", 00:38:57.540 "is_configured": true, 00:38:57.540 "data_offset": 2048, 00:38:57.540 "data_size": 63488 00:38:57.540 }, 00:38:57.540 { 
00:38:57.540 "name": "BaseBdev2", 00:38:57.540 "uuid": "380849a8-08e4-4cdb-a6b4-333e3ddf77f9", 00:38:57.540 "is_configured": true, 00:38:57.540 "data_offset": 2048, 00:38:57.540 "data_size": 63488 00:38:57.540 }, 00:38:57.540 { 00:38:57.540 "name": "BaseBdev3", 00:38:57.540 "uuid": "ce5a0ac8-7125-432e-b50d-1e1259c38783", 00:38:57.540 "is_configured": true, 00:38:57.540 "data_offset": 2048, 00:38:57.540 "data_size": 63488 00:38:57.540 }, 00:38:57.540 { 00:38:57.540 "name": "BaseBdev4", 00:38:57.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:57.540 "is_configured": false, 00:38:57.540 "data_offset": 0, 00:38:57.540 "data_size": 0 00:38:57.540 } 00:38:57.540 ] 00:38:57.540 }' 00:38:57.540 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:57.540 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:58.104 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:38:58.104 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.104 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:58.104 [2024-12-09 05:29:44.901203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:38:58.104 [2024-12-09 05:29:44.901565] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:38:58.104 [2024-12-09 05:29:44.901585] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:38:58.104 BaseBdev4 00:38:58.104 [2024-12-09 05:29:44.901961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:38:58.104 [2024-12-09 05:29:44.902253] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:38:58.104 [2024-12-09 05:29:44.902276] bdev_raid.c:1765:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:38:58.104 [2024-12-09 05:29:44.902458] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:58.104 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.104 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:38:58.104 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:38:58.104 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:58.104 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:38:58.104 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:58.104 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:58.104 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:38:58.104 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.104 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:58.104 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.105 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:38:58.105 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.105 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:58.105 [ 00:38:58.105 { 00:38:58.105 "name": "BaseBdev4", 00:38:58.105 "aliases": [ 00:38:58.105 "e38872b3-048d-4ad1-b67c-dc4f0a16cf42" 00:38:58.105 ], 00:38:58.105 "product_name": "Malloc 
disk", 00:38:58.105 "block_size": 512, 00:38:58.105 "num_blocks": 65536, 00:38:58.105 "uuid": "e38872b3-048d-4ad1-b67c-dc4f0a16cf42", 00:38:58.105 "assigned_rate_limits": { 00:38:58.105 "rw_ios_per_sec": 0, 00:38:58.105 "rw_mbytes_per_sec": 0, 00:38:58.105 "r_mbytes_per_sec": 0, 00:38:58.105 "w_mbytes_per_sec": 0 00:38:58.105 }, 00:38:58.105 "claimed": true, 00:38:58.105 "claim_type": "exclusive_write", 00:38:58.105 "zoned": false, 00:38:58.105 "supported_io_types": { 00:38:58.105 "read": true, 00:38:58.105 "write": true, 00:38:58.105 "unmap": true, 00:38:58.105 "flush": true, 00:38:58.105 "reset": true, 00:38:58.105 "nvme_admin": false, 00:38:58.105 "nvme_io": false, 00:38:58.105 "nvme_io_md": false, 00:38:58.105 "write_zeroes": true, 00:38:58.105 "zcopy": true, 00:38:58.105 "get_zone_info": false, 00:38:58.105 "zone_management": false, 00:38:58.105 "zone_append": false, 00:38:58.105 "compare": false, 00:38:58.105 "compare_and_write": false, 00:38:58.105 "abort": true, 00:38:58.105 "seek_hole": false, 00:38:58.105 "seek_data": false, 00:38:58.105 "copy": true, 00:38:58.105 "nvme_iov_md": false 00:38:58.105 }, 00:38:58.105 "memory_domains": [ 00:38:58.105 { 00:38:58.105 "dma_device_id": "system", 00:38:58.105 "dma_device_type": 1 00:38:58.105 }, 00:38:58.105 { 00:38:58.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:58.105 "dma_device_type": 2 00:38:58.105 } 00:38:58.105 ], 00:38:58.105 "driver_specific": {} 00:38:58.105 } 00:38:58.105 ] 00:38:58.105 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.105 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:38:58.105 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:38:58.105 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:38:58.105 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # 
verify_raid_bdev_state Existed_Raid online concat 64 4 00:38:58.105 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:58.105 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:58.105 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:38:58.105 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:58.105 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:58.105 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:58.105 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:58.105 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:58.105 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:58.105 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:58.105 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.105 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:58.105 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:58.105 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.105 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:58.105 "name": "Existed_Raid", 00:38:58.105 "uuid": "8d2baff3-0238-4334-853d-81f52df00d16", 00:38:58.105 "strip_size_kb": 64, 00:38:58.105 "state": "online", 00:38:58.105 "raid_level": "concat", 00:38:58.105 
"superblock": true, 00:38:58.105 "num_base_bdevs": 4, 00:38:58.105 "num_base_bdevs_discovered": 4, 00:38:58.105 "num_base_bdevs_operational": 4, 00:38:58.105 "base_bdevs_list": [ 00:38:58.105 { 00:38:58.105 "name": "BaseBdev1", 00:38:58.105 "uuid": "9fd42da5-7328-4673-ae40-0ead0a661a34", 00:38:58.105 "is_configured": true, 00:38:58.105 "data_offset": 2048, 00:38:58.105 "data_size": 63488 00:38:58.105 }, 00:38:58.105 { 00:38:58.105 "name": "BaseBdev2", 00:38:58.105 "uuid": "380849a8-08e4-4cdb-a6b4-333e3ddf77f9", 00:38:58.105 "is_configured": true, 00:38:58.105 "data_offset": 2048, 00:38:58.105 "data_size": 63488 00:38:58.105 }, 00:38:58.105 { 00:38:58.105 "name": "BaseBdev3", 00:38:58.105 "uuid": "ce5a0ac8-7125-432e-b50d-1e1259c38783", 00:38:58.105 "is_configured": true, 00:38:58.105 "data_offset": 2048, 00:38:58.105 "data_size": 63488 00:38:58.105 }, 00:38:58.105 { 00:38:58.105 "name": "BaseBdev4", 00:38:58.105 "uuid": "e38872b3-048d-4ad1-b67c-dc4f0a16cf42", 00:38:58.105 "is_configured": true, 00:38:58.105 "data_offset": 2048, 00:38:58.105 "data_size": 63488 00:38:58.105 } 00:38:58.105 ] 00:38:58.105 }' 00:38:58.105 05:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:58.105 05:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:58.676 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:38:58.676 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:38:58.676 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:38:58.676 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:38:58.676 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:38:58.676 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # 
local cmp_raid_bdev cmp_base_bdev 00:38:58.676 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:38:58.676 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:38:58.676 05:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.676 05:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:58.676 [2024-12-09 05:29:45.465884] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:58.676 05:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.676 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:58.676 "name": "Existed_Raid", 00:38:58.676 "aliases": [ 00:38:58.676 "8d2baff3-0238-4334-853d-81f52df00d16" 00:38:58.676 ], 00:38:58.676 "product_name": "Raid Volume", 00:38:58.676 "block_size": 512, 00:38:58.676 "num_blocks": 253952, 00:38:58.676 "uuid": "8d2baff3-0238-4334-853d-81f52df00d16", 00:38:58.676 "assigned_rate_limits": { 00:38:58.676 "rw_ios_per_sec": 0, 00:38:58.676 "rw_mbytes_per_sec": 0, 00:38:58.676 "r_mbytes_per_sec": 0, 00:38:58.676 "w_mbytes_per_sec": 0 00:38:58.676 }, 00:38:58.676 "claimed": false, 00:38:58.676 "zoned": false, 00:38:58.676 "supported_io_types": { 00:38:58.676 "read": true, 00:38:58.676 "write": true, 00:38:58.676 "unmap": true, 00:38:58.676 "flush": true, 00:38:58.676 "reset": true, 00:38:58.676 "nvme_admin": false, 00:38:58.676 "nvme_io": false, 00:38:58.676 "nvme_io_md": false, 00:38:58.676 "write_zeroes": true, 00:38:58.676 "zcopy": false, 00:38:58.676 "get_zone_info": false, 00:38:58.676 "zone_management": false, 00:38:58.676 "zone_append": false, 00:38:58.676 "compare": false, 00:38:58.676 "compare_and_write": false, 00:38:58.676 "abort": false, 00:38:58.676 "seek_hole": false, 00:38:58.676 "seek_data": false, 
00:38:58.676 "copy": false, 00:38:58.676 "nvme_iov_md": false 00:38:58.677 }, 00:38:58.677 "memory_domains": [ 00:38:58.677 { 00:38:58.677 "dma_device_id": "system", 00:38:58.677 "dma_device_type": 1 00:38:58.677 }, 00:38:58.677 { 00:38:58.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:58.677 "dma_device_type": 2 00:38:58.677 }, 00:38:58.677 { 00:38:58.677 "dma_device_id": "system", 00:38:58.677 "dma_device_type": 1 00:38:58.677 }, 00:38:58.677 { 00:38:58.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:58.677 "dma_device_type": 2 00:38:58.677 }, 00:38:58.677 { 00:38:58.677 "dma_device_id": "system", 00:38:58.677 "dma_device_type": 1 00:38:58.677 }, 00:38:58.677 { 00:38:58.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:58.677 "dma_device_type": 2 00:38:58.677 }, 00:38:58.677 { 00:38:58.677 "dma_device_id": "system", 00:38:58.677 "dma_device_type": 1 00:38:58.677 }, 00:38:58.677 { 00:38:58.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:58.677 "dma_device_type": 2 00:38:58.677 } 00:38:58.677 ], 00:38:58.677 "driver_specific": { 00:38:58.677 "raid": { 00:38:58.677 "uuid": "8d2baff3-0238-4334-853d-81f52df00d16", 00:38:58.677 "strip_size_kb": 64, 00:38:58.677 "state": "online", 00:38:58.677 "raid_level": "concat", 00:38:58.677 "superblock": true, 00:38:58.677 "num_base_bdevs": 4, 00:38:58.677 "num_base_bdevs_discovered": 4, 00:38:58.677 "num_base_bdevs_operational": 4, 00:38:58.677 "base_bdevs_list": [ 00:38:58.677 { 00:38:58.677 "name": "BaseBdev1", 00:38:58.677 "uuid": "9fd42da5-7328-4673-ae40-0ead0a661a34", 00:38:58.677 "is_configured": true, 00:38:58.677 "data_offset": 2048, 00:38:58.677 "data_size": 63488 00:38:58.677 }, 00:38:58.677 { 00:38:58.677 "name": "BaseBdev2", 00:38:58.677 "uuid": "380849a8-08e4-4cdb-a6b4-333e3ddf77f9", 00:38:58.677 "is_configured": true, 00:38:58.677 "data_offset": 2048, 00:38:58.677 "data_size": 63488 00:38:58.677 }, 00:38:58.677 { 00:38:58.677 "name": "BaseBdev3", 00:38:58.677 "uuid": 
"ce5a0ac8-7125-432e-b50d-1e1259c38783", 00:38:58.677 "is_configured": true, 00:38:58.677 "data_offset": 2048, 00:38:58.677 "data_size": 63488 00:38:58.677 }, 00:38:58.677 { 00:38:58.677 "name": "BaseBdev4", 00:38:58.677 "uuid": "e38872b3-048d-4ad1-b67c-dc4f0a16cf42", 00:38:58.677 "is_configured": true, 00:38:58.677 "data_offset": 2048, 00:38:58.677 "data_size": 63488 00:38:58.677 } 00:38:58.677 ] 00:38:58.677 } 00:38:58.677 } 00:38:58.677 }' 00:38:58.677 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:38:58.677 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:38:58.677 BaseBdev2 00:38:58.677 BaseBdev3 00:38:58.677 BaseBdev4' 00:38:58.677 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:58.677 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:38:58.677 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:58.677 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:38:58.677 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:58.677 05:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.677 05:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:58.677 05:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.936 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:58.936 05:29:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:58.936 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:58.936 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:38:58.936 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:58.936 05:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.936 05:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:58.936 05:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.936 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:58.936 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:58.936 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:58.936 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:38:58.936 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:58.936 05:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.936 05:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:58.936 05:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.936 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:58.936 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:58.936 
05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:58.936 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:38:58.936 05:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.936 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:58.936 05:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:58.936 05:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.936 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:38:58.936 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:38:58.936 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:38:58.936 05:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.936 05:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:58.936 [2024-12-09 05:29:45.841558] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:58.936 [2024-12-09 05:29:45.841762] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:58.936 [2024-12-09 05:29:45.841980] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:59.195 05:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.195 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:38:59.195 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:38:59.195 
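The `has_redundancy concat` call above returns 1 (via the `case $1 in` dispatch visible in the trace), which is why the test then sets `expected_state=offline` after deleting BaseBdev1: a concat array cannot survive losing a member. A minimal sketch of that dispatch, assuming a plausible set of redundant levels (the exact level list handled by bdev_raid.sh is not shown in this chunk):

```shell
#!/usr/bin/env bash
# Hypothetical reconstruction of the has_redundancy helper: levels that can
# tolerate a missing base bdev return 0; concat/raid0 return 1.
has_redundancy() {
  case $1 in
    raid1 | raid5f) return 0 ;;  # assumed redundant levels, for illustration
    *) return 1 ;;
  esac
}

if has_redundancy concat; then
  expected_state=degraded
else
  expected_state=offline   # the branch this trace takes
fi
echo "$expected_state"
```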
05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:38:59.195 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:38:59.195 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:38:59.195 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:38:59.195 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:59.195 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:38:59.195 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:38:59.195 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:59.195 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:38:59.195 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:59.195 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:59.195 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:59.195 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:59.195 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:59.195 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:59.195 05:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.195 05:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:59.195 05:29:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.195 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:59.195 "name": "Existed_Raid", 00:38:59.195 "uuid": "8d2baff3-0238-4334-853d-81f52df00d16", 00:38:59.195 "strip_size_kb": 64, 00:38:59.195 "state": "offline", 00:38:59.195 "raid_level": "concat", 00:38:59.195 "superblock": true, 00:38:59.195 "num_base_bdevs": 4, 00:38:59.195 "num_base_bdevs_discovered": 3, 00:38:59.195 "num_base_bdevs_operational": 3, 00:38:59.195 "base_bdevs_list": [ 00:38:59.195 { 00:38:59.195 "name": null, 00:38:59.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:59.195 "is_configured": false, 00:38:59.195 "data_offset": 0, 00:38:59.195 "data_size": 63488 00:38:59.195 }, 00:38:59.195 { 00:38:59.195 "name": "BaseBdev2", 00:38:59.195 "uuid": "380849a8-08e4-4cdb-a6b4-333e3ddf77f9", 00:38:59.195 "is_configured": true, 00:38:59.195 "data_offset": 2048, 00:38:59.195 "data_size": 63488 00:38:59.195 }, 00:38:59.195 { 00:38:59.195 "name": "BaseBdev3", 00:38:59.195 "uuid": "ce5a0ac8-7125-432e-b50d-1e1259c38783", 00:38:59.195 "is_configured": true, 00:38:59.195 "data_offset": 2048, 00:38:59.195 "data_size": 63488 00:38:59.195 }, 00:38:59.195 { 00:38:59.195 "name": "BaseBdev4", 00:38:59.195 "uuid": "e38872b3-048d-4ad1-b67c-dc4f0a16cf42", 00:38:59.195 "is_configured": true, 00:38:59.195 "data_offset": 2048, 00:38:59.195 "data_size": 63488 00:38:59.195 } 00:38:59.195 ] 00:38:59.195 }' 00:38:59.195 05:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:59.195 05:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:59.762 05:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:38:59.762 05:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:38:59.762 05:29:46 
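The `(( i = 1 )) / (( i < num_base_bdevs )) / (( i++ ))` counters that follow drive the teardown loop: with `num_base_bdevs=4` the test deletes BaseBdev2, BaseBdev3, then BaseBdev4, re-querying the raid bdev after each removal. A standalone sketch of that iteration pattern (the `deleted` array stands in for the `rpc_cmd bdev_malloc_delete` calls in the real run):

```shell
#!/usr/bin/env bash
# Mirror of the removal loop in the trace: i walks 1..num_base_bdevs-1 and
# each pass removes the (i+1)-th base bdev.
num_base_bdevs=4
deleted=()
for (( i = 1; i < num_base_bdevs; i++ )); do
  deleted+=("BaseBdev$((i + 1))")  # stand-in for: rpc_cmd bdev_malloc_delete BaseBdev$((i + 1))
done
echo "${deleted[*]}"
```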
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:59.763 05:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:38:59.763 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.763 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:59.763 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.763 05:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:38:59.763 05:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:38:59.763 05:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:38:59.763 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.763 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:59.763 [2024-12-09 05:29:46.517349] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:38:59.763 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.763 05:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:38:59.763 05:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:38:59.763 05:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:59.763 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.763 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:59.763 05:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r 
'.[0]["name"]' 00:38:59.763 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.763 05:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:38:59.763 05:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:38:59.763 05:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:38:59.763 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.763 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:59.763 [2024-12-09 05:29:46.660414] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:39:00.021 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:39:00.022 05:29:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:00.022 [2024-12-09 05:29:46.801711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:39:00.022 [2024-12-09 05:29:46.801991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:00.022 BaseBdev2 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.022 05:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:39:00.281 [ 00:39:00.281 { 00:39:00.281 "name": "BaseBdev2", 00:39:00.281 "aliases": [ 00:39:00.281 "466b9394-5424-48dc-8ec0-69d464cfde6c" 00:39:00.281 ], 00:39:00.281 "product_name": "Malloc disk", 00:39:00.281 "block_size": 512, 00:39:00.281 "num_blocks": 65536, 00:39:00.281 "uuid": "466b9394-5424-48dc-8ec0-69d464cfde6c", 00:39:00.281 "assigned_rate_limits": { 00:39:00.281 "rw_ios_per_sec": 0, 00:39:00.281 "rw_mbytes_per_sec": 0, 00:39:00.281 "r_mbytes_per_sec": 0, 00:39:00.281 "w_mbytes_per_sec": 0 00:39:00.281 }, 00:39:00.281 "claimed": false, 00:39:00.281 "zoned": false, 00:39:00.281 "supported_io_types": { 00:39:00.281 "read": true, 00:39:00.281 "write": true, 00:39:00.281 "unmap": true, 00:39:00.281 "flush": true, 00:39:00.281 "reset": true, 00:39:00.281 "nvme_admin": false, 00:39:00.281 "nvme_io": false, 00:39:00.281 "nvme_io_md": false, 00:39:00.281 "write_zeroes": true, 00:39:00.281 "zcopy": true, 00:39:00.281 "get_zone_info": false, 00:39:00.281 "zone_management": false, 00:39:00.281 "zone_append": false, 00:39:00.281 "compare": false, 00:39:00.281 "compare_and_write": false, 00:39:00.281 "abort": true, 00:39:00.281 "seek_hole": false, 00:39:00.281 "seek_data": false, 00:39:00.281 "copy": true, 00:39:00.281 "nvme_iov_md": false 00:39:00.281 }, 00:39:00.281 "memory_domains": [ 00:39:00.281 { 00:39:00.281 "dma_device_id": "system", 00:39:00.281 "dma_device_type": 1 00:39:00.281 }, 00:39:00.281 { 00:39:00.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:00.281 "dma_device_type": 2 00:39:00.281 } 00:39:00.281 ], 00:39:00.281 "driver_specific": {} 00:39:00.281 } 00:39:00.281 ] 00:39:00.281 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:00.281 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:39:00.281 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:39:00.281 05:29:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:39:00.281 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:39:00.281 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.281 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:00.281 BaseBdev3 00:39:00.281 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:00.281 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:39:00.281 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:39:00.281 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:00.281 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:39:00.281 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:00.281 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:00.281 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:00.281 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.281 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:00.281 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:00.281 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:39:00.281 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.281 05:29:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:00.281 [ 00:39:00.281 { 00:39:00.281 "name": "BaseBdev3", 00:39:00.281 "aliases": [ 00:39:00.281 "f3c18c9d-b430-4038-a0ba-cc0d4cf98d86" 00:39:00.281 ], 00:39:00.281 "product_name": "Malloc disk", 00:39:00.281 "block_size": 512, 00:39:00.281 "num_blocks": 65536, 00:39:00.281 "uuid": "f3c18c9d-b430-4038-a0ba-cc0d4cf98d86", 00:39:00.281 "assigned_rate_limits": { 00:39:00.281 "rw_ios_per_sec": 0, 00:39:00.281 "rw_mbytes_per_sec": 0, 00:39:00.281 "r_mbytes_per_sec": 0, 00:39:00.281 "w_mbytes_per_sec": 0 00:39:00.281 }, 00:39:00.281 "claimed": false, 00:39:00.281 "zoned": false, 00:39:00.281 "supported_io_types": { 00:39:00.281 "read": true, 00:39:00.281 "write": true, 00:39:00.281 "unmap": true, 00:39:00.281 "flush": true, 00:39:00.281 "reset": true, 00:39:00.281 "nvme_admin": false, 00:39:00.281 "nvme_io": false, 00:39:00.281 "nvme_io_md": false, 00:39:00.281 "write_zeroes": true, 00:39:00.281 "zcopy": true, 00:39:00.281 "get_zone_info": false, 00:39:00.281 "zone_management": false, 00:39:00.281 "zone_append": false, 00:39:00.281 "compare": false, 00:39:00.281 "compare_and_write": false, 00:39:00.281 "abort": true, 00:39:00.281 "seek_hole": false, 00:39:00.281 "seek_data": false, 00:39:00.281 "copy": true, 00:39:00.281 "nvme_iov_md": false 00:39:00.281 }, 00:39:00.281 "memory_domains": [ 00:39:00.281 { 00:39:00.281 "dma_device_id": "system", 00:39:00.281 "dma_device_type": 1 00:39:00.281 }, 00:39:00.281 { 00:39:00.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:00.281 "dma_device_type": 2 00:39:00.281 } 00:39:00.281 ], 00:39:00.281 "driver_specific": {} 00:39:00.281 } 00:39:00.281 ] 00:39:00.281 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:00.281 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:39:00.281 05:29:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:39:00.281 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:39:00.281 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:39:00.281 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.281 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:00.281 BaseBdev4 00:39:00.281 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:00.281 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:39:00.281 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:39:00.281 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:00.282 [ 00:39:00.282 { 00:39:00.282 "name": "BaseBdev4", 00:39:00.282 "aliases": [ 00:39:00.282 "4725c0e0-c877-4d0a-a1ab-daadab72ced1" 00:39:00.282 ], 00:39:00.282 "product_name": "Malloc disk", 00:39:00.282 "block_size": 512, 00:39:00.282 "num_blocks": 65536, 00:39:00.282 "uuid": "4725c0e0-c877-4d0a-a1ab-daadab72ced1", 00:39:00.282 "assigned_rate_limits": { 00:39:00.282 "rw_ios_per_sec": 0, 00:39:00.282 "rw_mbytes_per_sec": 0, 00:39:00.282 "r_mbytes_per_sec": 0, 00:39:00.282 "w_mbytes_per_sec": 0 00:39:00.282 }, 00:39:00.282 "claimed": false, 00:39:00.282 "zoned": false, 00:39:00.282 "supported_io_types": { 00:39:00.282 "read": true, 00:39:00.282 "write": true, 00:39:00.282 "unmap": true, 00:39:00.282 "flush": true, 00:39:00.282 "reset": true, 00:39:00.282 "nvme_admin": false, 00:39:00.282 "nvme_io": false, 00:39:00.282 "nvme_io_md": false, 00:39:00.282 "write_zeroes": true, 00:39:00.282 "zcopy": true, 00:39:00.282 "get_zone_info": false, 00:39:00.282 "zone_management": false, 00:39:00.282 "zone_append": false, 00:39:00.282 "compare": false, 00:39:00.282 "compare_and_write": false, 00:39:00.282 "abort": true, 00:39:00.282 "seek_hole": false, 00:39:00.282 "seek_data": false, 00:39:00.282 "copy": true, 00:39:00.282 "nvme_iov_md": false 00:39:00.282 }, 00:39:00.282 "memory_domains": [ 00:39:00.282 { 00:39:00.282 "dma_device_id": "system", 00:39:00.282 "dma_device_type": 1 00:39:00.282 }, 00:39:00.282 { 00:39:00.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:00.282 "dma_device_type": 2 00:39:00.282 } 00:39:00.282 ], 00:39:00.282 "driver_specific": {} 00:39:00.282 } 00:39:00.282 ] 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:00.282 [2024-12-09 05:29:47.164337] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:39:00.282 [2024-12-09 05:29:47.164577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:39:00.282 [2024-12-09 05:29:47.164707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:00.282 [2024-12-09 05:29:47.167426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:39:00.282 [2024-12-09 05:29:47.167647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:00.282 "name": "Existed_Raid", 00:39:00.282 "uuid": "0311bc06-ed54-4ab9-ac46-bf333c59907a", 00:39:00.282 "strip_size_kb": 64, 00:39:00.282 "state": "configuring", 00:39:00.282 "raid_level": "concat", 00:39:00.282 "superblock": true, 00:39:00.282 "num_base_bdevs": 4, 00:39:00.282 "num_base_bdevs_discovered": 3, 00:39:00.282 "num_base_bdevs_operational": 4, 00:39:00.282 "base_bdevs_list": [ 00:39:00.282 { 00:39:00.282 "name": "BaseBdev1", 00:39:00.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:00.282 "is_configured": false, 00:39:00.282 "data_offset": 0, 00:39:00.282 "data_size": 0 00:39:00.282 }, 00:39:00.282 { 00:39:00.282 "name": "BaseBdev2", 00:39:00.282 "uuid": 
"466b9394-5424-48dc-8ec0-69d464cfde6c", 00:39:00.282 "is_configured": true, 00:39:00.282 "data_offset": 2048, 00:39:00.282 "data_size": 63488 00:39:00.282 }, 00:39:00.282 { 00:39:00.282 "name": "BaseBdev3", 00:39:00.282 "uuid": "f3c18c9d-b430-4038-a0ba-cc0d4cf98d86", 00:39:00.282 "is_configured": true, 00:39:00.282 "data_offset": 2048, 00:39:00.282 "data_size": 63488 00:39:00.282 }, 00:39:00.282 { 00:39:00.282 "name": "BaseBdev4", 00:39:00.282 "uuid": "4725c0e0-c877-4d0a-a1ab-daadab72ced1", 00:39:00.282 "is_configured": true, 00:39:00.282 "data_offset": 2048, 00:39:00.282 "data_size": 63488 00:39:00.282 } 00:39:00.282 ] 00:39:00.282 }' 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:00.282 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:00.849 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:39:00.849 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.849 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:00.849 [2024-12-09 05:29:47.708581] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:39:00.849 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:00.849 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:39:00.849 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:00.849 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:00.849 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:39:00.849 05:29:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:00.849 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:00.849 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:00.849 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:00.849 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:00.849 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:00.849 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:00.849 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:00.849 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.849 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:00.849 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:00.849 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:00.849 "name": "Existed_Raid", 00:39:00.849 "uuid": "0311bc06-ed54-4ab9-ac46-bf333c59907a", 00:39:00.849 "strip_size_kb": 64, 00:39:00.849 "state": "configuring", 00:39:00.849 "raid_level": "concat", 00:39:00.849 "superblock": true, 00:39:00.849 "num_base_bdevs": 4, 00:39:00.849 "num_base_bdevs_discovered": 2, 00:39:00.849 "num_base_bdevs_operational": 4, 00:39:00.849 "base_bdevs_list": [ 00:39:00.849 { 00:39:00.849 "name": "BaseBdev1", 00:39:00.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:00.849 "is_configured": false, 00:39:00.849 "data_offset": 0, 00:39:00.849 "data_size": 0 00:39:00.849 }, 00:39:00.849 { 00:39:00.849 "name": null, 00:39:00.849 "uuid": 
"466b9394-5424-48dc-8ec0-69d464cfde6c", 00:39:00.849 "is_configured": false, 00:39:00.849 "data_offset": 0, 00:39:00.849 "data_size": 63488 00:39:00.849 }, 00:39:00.849 { 00:39:00.849 "name": "BaseBdev3", 00:39:00.849 "uuid": "f3c18c9d-b430-4038-a0ba-cc0d4cf98d86", 00:39:00.849 "is_configured": true, 00:39:00.849 "data_offset": 2048, 00:39:00.849 "data_size": 63488 00:39:00.849 }, 00:39:00.849 { 00:39:00.849 "name": "BaseBdev4", 00:39:00.849 "uuid": "4725c0e0-c877-4d0a-a1ab-daadab72ced1", 00:39:00.849 "is_configured": true, 00:39:00.849 "data_offset": 2048, 00:39:00.849 "data_size": 63488 00:39:00.849 } 00:39:00.849 ] 00:39:00.849 }' 00:39:00.849 05:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:00.849 05:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:01.420 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:01.420 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:39:01.420 05:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.420 05:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:01.420 05:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.420 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:39:01.420 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:39:01.420 05:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.420 05:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:01.420 [2024-12-09 05:29:48.324879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:39:01.420 BaseBdev1 00:39:01.420 05:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.420 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:39:01.420 05:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:39:01.420 05:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:01.420 05:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:39:01.420 05:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:01.420 05:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:01.420 05:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:01.420 05:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.420 05:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:01.420 05:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.420 05:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:39:01.420 05:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.420 05:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:01.420 [ 00:39:01.420 { 00:39:01.420 "name": "BaseBdev1", 00:39:01.420 "aliases": [ 00:39:01.420 "9f70ce44-cea4-4e3c-9711-77a4a7356da6" 00:39:01.420 ], 00:39:01.420 "product_name": "Malloc disk", 00:39:01.420 "block_size": 512, 00:39:01.420 "num_blocks": 65536, 00:39:01.420 "uuid": "9f70ce44-cea4-4e3c-9711-77a4a7356da6", 
00:39:01.420 "assigned_rate_limits": { 00:39:01.420 "rw_ios_per_sec": 0, 00:39:01.420 "rw_mbytes_per_sec": 0, 00:39:01.420 "r_mbytes_per_sec": 0, 00:39:01.420 "w_mbytes_per_sec": 0 00:39:01.420 }, 00:39:01.420 "claimed": true, 00:39:01.420 "claim_type": "exclusive_write", 00:39:01.420 "zoned": false, 00:39:01.420 "supported_io_types": { 00:39:01.420 "read": true, 00:39:01.420 "write": true, 00:39:01.420 "unmap": true, 00:39:01.420 "flush": true, 00:39:01.420 "reset": true, 00:39:01.420 "nvme_admin": false, 00:39:01.420 "nvme_io": false, 00:39:01.420 "nvme_io_md": false, 00:39:01.420 "write_zeroes": true, 00:39:01.420 "zcopy": true, 00:39:01.420 "get_zone_info": false, 00:39:01.420 "zone_management": false, 00:39:01.420 "zone_append": false, 00:39:01.420 "compare": false, 00:39:01.420 "compare_and_write": false, 00:39:01.420 "abort": true, 00:39:01.420 "seek_hole": false, 00:39:01.420 "seek_data": false, 00:39:01.420 "copy": true, 00:39:01.420 "nvme_iov_md": false 00:39:01.420 }, 00:39:01.420 "memory_domains": [ 00:39:01.420 { 00:39:01.420 "dma_device_id": "system", 00:39:01.420 "dma_device_type": 1 00:39:01.420 }, 00:39:01.420 { 00:39:01.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:01.420 "dma_device_type": 2 00:39:01.420 } 00:39:01.420 ], 00:39:01.420 "driver_specific": {} 00:39:01.420 } 00:39:01.420 ] 00:39:01.420 05:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.420 05:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:39:01.420 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:39:01.420 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:01.420 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:01.420 05:29:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:39:01.420 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:01.420 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:01.420 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:01.420 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:01.421 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:01.421 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:01.421 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:01.421 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:01.421 05:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.421 05:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:01.421 05:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.679 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:01.679 "name": "Existed_Raid", 00:39:01.679 "uuid": "0311bc06-ed54-4ab9-ac46-bf333c59907a", 00:39:01.679 "strip_size_kb": 64, 00:39:01.679 "state": "configuring", 00:39:01.679 "raid_level": "concat", 00:39:01.679 "superblock": true, 00:39:01.679 "num_base_bdevs": 4, 00:39:01.679 "num_base_bdevs_discovered": 3, 00:39:01.679 "num_base_bdevs_operational": 4, 00:39:01.679 "base_bdevs_list": [ 00:39:01.679 { 00:39:01.679 "name": "BaseBdev1", 00:39:01.679 "uuid": "9f70ce44-cea4-4e3c-9711-77a4a7356da6", 00:39:01.679 
"is_configured": true, 00:39:01.679 "data_offset": 2048, 00:39:01.679 "data_size": 63488 00:39:01.679 }, 00:39:01.679 { 00:39:01.679 "name": null, 00:39:01.679 "uuid": "466b9394-5424-48dc-8ec0-69d464cfde6c", 00:39:01.679 "is_configured": false, 00:39:01.679 "data_offset": 0, 00:39:01.679 "data_size": 63488 00:39:01.679 }, 00:39:01.679 { 00:39:01.679 "name": "BaseBdev3", 00:39:01.679 "uuid": "f3c18c9d-b430-4038-a0ba-cc0d4cf98d86", 00:39:01.679 "is_configured": true, 00:39:01.679 "data_offset": 2048, 00:39:01.679 "data_size": 63488 00:39:01.679 }, 00:39:01.679 { 00:39:01.679 "name": "BaseBdev4", 00:39:01.679 "uuid": "4725c0e0-c877-4d0a-a1ab-daadab72ced1", 00:39:01.679 "is_configured": true, 00:39:01.679 "data_offset": 2048, 00:39:01.679 "data_size": 63488 00:39:01.679 } 00:39:01.679 ] 00:39:01.679 }' 00:39:01.679 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:01.679 05:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:01.938 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:01.938 05:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.938 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:39:01.938 05:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:01.938 05:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.938 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:39:01.938 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:39:01.938 05:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.938 05:29:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:01.938 [2024-12-09 05:29:48.905249] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:39:02.197 05:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:02.197 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:39:02.197 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:02.197 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:02.197 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:39:02.197 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:02.197 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:02.197 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:02.197 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:02.197 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:02.197 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:02.197 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:02.197 05:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:02.197 05:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:02.197 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:02.197 05:29:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:02.197 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:02.197 "name": "Existed_Raid", 00:39:02.197 "uuid": "0311bc06-ed54-4ab9-ac46-bf333c59907a", 00:39:02.197 "strip_size_kb": 64, 00:39:02.197 "state": "configuring", 00:39:02.197 "raid_level": "concat", 00:39:02.197 "superblock": true, 00:39:02.197 "num_base_bdevs": 4, 00:39:02.197 "num_base_bdevs_discovered": 2, 00:39:02.197 "num_base_bdevs_operational": 4, 00:39:02.197 "base_bdevs_list": [ 00:39:02.197 { 00:39:02.197 "name": "BaseBdev1", 00:39:02.197 "uuid": "9f70ce44-cea4-4e3c-9711-77a4a7356da6", 00:39:02.197 "is_configured": true, 00:39:02.197 "data_offset": 2048, 00:39:02.197 "data_size": 63488 00:39:02.197 }, 00:39:02.197 { 00:39:02.197 "name": null, 00:39:02.197 "uuid": "466b9394-5424-48dc-8ec0-69d464cfde6c", 00:39:02.197 "is_configured": false, 00:39:02.197 "data_offset": 0, 00:39:02.197 "data_size": 63488 00:39:02.197 }, 00:39:02.197 { 00:39:02.197 "name": null, 00:39:02.197 "uuid": "f3c18c9d-b430-4038-a0ba-cc0d4cf98d86", 00:39:02.197 "is_configured": false, 00:39:02.197 "data_offset": 0, 00:39:02.197 "data_size": 63488 00:39:02.197 }, 00:39:02.197 { 00:39:02.197 "name": "BaseBdev4", 00:39:02.197 "uuid": "4725c0e0-c877-4d0a-a1ab-daadab72ced1", 00:39:02.197 "is_configured": true, 00:39:02.197 "data_offset": 2048, 00:39:02.197 "data_size": 63488 00:39:02.197 } 00:39:02.197 ] 00:39:02.197 }' 00:39:02.197 05:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:02.197 05:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:02.456 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:39:02.456 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:02.456 
05:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:02.456 05:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:02.715 05:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:02.715 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:39:02.715 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:39:02.715 05:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:02.715 05:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:02.715 [2024-12-09 05:29:49.481460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:39:02.715 05:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:02.715 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:39:02.715 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:02.715 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:02.715 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:39:02.715 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:02.715 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:02.715 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:02.715 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:39:02.715 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:02.715 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:02.716 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:02.716 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:02.716 05:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:02.716 05:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:02.716 05:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:02.716 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:02.716 "name": "Existed_Raid", 00:39:02.716 "uuid": "0311bc06-ed54-4ab9-ac46-bf333c59907a", 00:39:02.716 "strip_size_kb": 64, 00:39:02.716 "state": "configuring", 00:39:02.716 "raid_level": "concat", 00:39:02.716 "superblock": true, 00:39:02.716 "num_base_bdevs": 4, 00:39:02.716 "num_base_bdevs_discovered": 3, 00:39:02.716 "num_base_bdevs_operational": 4, 00:39:02.716 "base_bdevs_list": [ 00:39:02.716 { 00:39:02.716 "name": "BaseBdev1", 00:39:02.716 "uuid": "9f70ce44-cea4-4e3c-9711-77a4a7356da6", 00:39:02.716 "is_configured": true, 00:39:02.716 "data_offset": 2048, 00:39:02.716 "data_size": 63488 00:39:02.716 }, 00:39:02.716 { 00:39:02.716 "name": null, 00:39:02.716 "uuid": "466b9394-5424-48dc-8ec0-69d464cfde6c", 00:39:02.716 "is_configured": false, 00:39:02.716 "data_offset": 0, 00:39:02.716 "data_size": 63488 00:39:02.716 }, 00:39:02.716 { 00:39:02.716 "name": "BaseBdev3", 00:39:02.716 "uuid": "f3c18c9d-b430-4038-a0ba-cc0d4cf98d86", 00:39:02.716 "is_configured": true, 00:39:02.716 "data_offset": 2048, 00:39:02.716 "data_size": 63488 00:39:02.716 }, 
00:39:02.716 { 00:39:02.716 "name": "BaseBdev4", 00:39:02.716 "uuid": "4725c0e0-c877-4d0a-a1ab-daadab72ced1", 00:39:02.716 "is_configured": true, 00:39:02.716 "data_offset": 2048, 00:39:02.716 "data_size": 63488 00:39:02.716 } 00:39:02.716 ] 00:39:02.716 }' 00:39:02.716 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:02.716 05:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:03.283 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:03.283 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:39:03.283 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:03.283 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:03.283 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:03.283 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:39:03.283 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:39:03.283 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:03.283 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:03.283 [2024-12-09 05:29:50.069761] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:39:03.283 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:03.283 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:39:03.283 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:39:03.283 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:03.283 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:39:03.283 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:03.283 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:03.283 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:03.283 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:03.283 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:03.283 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:03.283 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:03.283 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:03.283 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:03.283 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:03.283 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:03.283 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:03.283 "name": "Existed_Raid", 00:39:03.283 "uuid": "0311bc06-ed54-4ab9-ac46-bf333c59907a", 00:39:03.283 "strip_size_kb": 64, 00:39:03.283 "state": "configuring", 00:39:03.283 "raid_level": "concat", 00:39:03.283 "superblock": true, 00:39:03.283 "num_base_bdevs": 4, 00:39:03.283 "num_base_bdevs_discovered": 2, 00:39:03.283 "num_base_bdevs_operational": 4, 00:39:03.283 
"base_bdevs_list": [ 00:39:03.283 { 00:39:03.283 "name": null, 00:39:03.283 "uuid": "9f70ce44-cea4-4e3c-9711-77a4a7356da6", 00:39:03.283 "is_configured": false, 00:39:03.283 "data_offset": 0, 00:39:03.283 "data_size": 63488 00:39:03.283 }, 00:39:03.283 { 00:39:03.283 "name": null, 00:39:03.283 "uuid": "466b9394-5424-48dc-8ec0-69d464cfde6c", 00:39:03.283 "is_configured": false, 00:39:03.283 "data_offset": 0, 00:39:03.283 "data_size": 63488 00:39:03.283 }, 00:39:03.283 { 00:39:03.283 "name": "BaseBdev3", 00:39:03.283 "uuid": "f3c18c9d-b430-4038-a0ba-cc0d4cf98d86", 00:39:03.283 "is_configured": true, 00:39:03.283 "data_offset": 2048, 00:39:03.283 "data_size": 63488 00:39:03.283 }, 00:39:03.283 { 00:39:03.283 "name": "BaseBdev4", 00:39:03.283 "uuid": "4725c0e0-c877-4d0a-a1ab-daadab72ced1", 00:39:03.283 "is_configured": true, 00:39:03.283 "data_offset": 2048, 00:39:03.283 "data_size": 63488 00:39:03.283 } 00:39:03.283 ] 00:39:03.283 }' 00:39:03.283 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:03.283 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:03.851 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:03.851 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:39:03.851 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:03.851 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:03.851 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:03.851 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:39:03.851 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 
00:39:03.851 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:03.851 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:03.851 [2024-12-09 05:29:50.750593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:03.851 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:03.851 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:39:03.851 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:03.851 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:03.851 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:39:03.851 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:03.851 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:03.851 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:03.851 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:03.851 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:03.851 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:03.851 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:03.851 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:03.851 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:39:03.851 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:03.851 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:03.851 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:03.851 "name": "Existed_Raid", 00:39:03.851 "uuid": "0311bc06-ed54-4ab9-ac46-bf333c59907a", 00:39:03.851 "strip_size_kb": 64, 00:39:03.851 "state": "configuring", 00:39:03.851 "raid_level": "concat", 00:39:03.851 "superblock": true, 00:39:03.851 "num_base_bdevs": 4, 00:39:03.851 "num_base_bdevs_discovered": 3, 00:39:03.851 "num_base_bdevs_operational": 4, 00:39:03.851 "base_bdevs_list": [ 00:39:03.851 { 00:39:03.851 "name": null, 00:39:03.851 "uuid": "9f70ce44-cea4-4e3c-9711-77a4a7356da6", 00:39:03.851 "is_configured": false, 00:39:03.851 "data_offset": 0, 00:39:03.851 "data_size": 63488 00:39:03.851 }, 00:39:03.851 { 00:39:03.851 "name": "BaseBdev2", 00:39:03.851 "uuid": "466b9394-5424-48dc-8ec0-69d464cfde6c", 00:39:03.851 "is_configured": true, 00:39:03.851 "data_offset": 2048, 00:39:03.851 "data_size": 63488 00:39:03.851 }, 00:39:03.851 { 00:39:03.851 "name": "BaseBdev3", 00:39:03.851 "uuid": "f3c18c9d-b430-4038-a0ba-cc0d4cf98d86", 00:39:03.851 "is_configured": true, 00:39:03.851 "data_offset": 2048, 00:39:03.851 "data_size": 63488 00:39:03.851 }, 00:39:03.851 { 00:39:03.851 "name": "BaseBdev4", 00:39:03.851 "uuid": "4725c0e0-c877-4d0a-a1ab-daadab72ced1", 00:39:03.851 "is_configured": true, 00:39:03.851 "data_offset": 2048, 00:39:03.851 "data_size": 63488 00:39:03.851 } 00:39:03.851 ] 00:39:03.851 }' 00:39:03.851 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:03.851 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:04.425 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 
00:39:04.425 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:39:04.425 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:04.425 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:04.425 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:04.425 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:39:04.425 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:04.425 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:04.425 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:04.425 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:39:04.425 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:04.684 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9f70ce44-cea4-4e3c-9711-77a4a7356da6 00:39:04.684 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:04.684 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:04.684 [2024-12-09 05:29:51.443896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:39:04.684 NewBaseBdev 00:39:04.684 [2024-12-09 05:29:51.444385] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:39:04.684 [2024-12-09 05:29:51.444409] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:39:04.684 [2024-12-09 05:29:51.444790] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:39:04.684 [2024-12-09 05:29:51.445019] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:39:04.684 [2024-12-09 05:29:51.445040] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:39:04.684 [2024-12-09 05:29:51.445201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:04.684 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:04.684 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:39:04.684 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:39:04.684 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:04.684 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:39:04.684 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:04.684 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:04.684 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:04.684 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:04.684 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:04.684 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:04.684 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:39:04.684 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:39:04.684 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:04.684 [ 00:39:04.684 { 00:39:04.684 "name": "NewBaseBdev", 00:39:04.684 "aliases": [ 00:39:04.684 "9f70ce44-cea4-4e3c-9711-77a4a7356da6" 00:39:04.684 ], 00:39:04.684 "product_name": "Malloc disk", 00:39:04.684 "block_size": 512, 00:39:04.684 "num_blocks": 65536, 00:39:04.684 "uuid": "9f70ce44-cea4-4e3c-9711-77a4a7356da6", 00:39:04.684 "assigned_rate_limits": { 00:39:04.684 "rw_ios_per_sec": 0, 00:39:04.684 "rw_mbytes_per_sec": 0, 00:39:04.684 "r_mbytes_per_sec": 0, 00:39:04.684 "w_mbytes_per_sec": 0 00:39:04.684 }, 00:39:04.684 "claimed": true, 00:39:04.684 "claim_type": "exclusive_write", 00:39:04.684 "zoned": false, 00:39:04.684 "supported_io_types": { 00:39:04.684 "read": true, 00:39:04.684 "write": true, 00:39:04.684 "unmap": true, 00:39:04.684 "flush": true, 00:39:04.684 "reset": true, 00:39:04.684 "nvme_admin": false, 00:39:04.684 "nvme_io": false, 00:39:04.684 "nvme_io_md": false, 00:39:04.684 "write_zeroes": true, 00:39:04.684 "zcopy": true, 00:39:04.684 "get_zone_info": false, 00:39:04.684 "zone_management": false, 00:39:04.684 "zone_append": false, 00:39:04.684 "compare": false, 00:39:04.684 "compare_and_write": false, 00:39:04.684 "abort": true, 00:39:04.684 "seek_hole": false, 00:39:04.684 "seek_data": false, 00:39:04.684 "copy": true, 00:39:04.684 "nvme_iov_md": false 00:39:04.684 }, 00:39:04.684 "memory_domains": [ 00:39:04.684 { 00:39:04.684 "dma_device_id": "system", 00:39:04.684 "dma_device_type": 1 00:39:04.684 }, 00:39:04.684 { 00:39:04.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:04.684 "dma_device_type": 2 00:39:04.684 } 00:39:04.685 ], 00:39:04.685 "driver_specific": {} 00:39:04.685 } 00:39:04.685 ] 00:39:04.685 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:04.685 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:39:04.685 
05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:39:04.685 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:04.685 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:04.685 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:39:04.685 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:04.685 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:04.685 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:04.685 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:04.685 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:04.685 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:04.685 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:04.685 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:04.685 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:04.685 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:04.685 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:04.685 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:04.685 "name": "Existed_Raid", 00:39:04.685 "uuid": "0311bc06-ed54-4ab9-ac46-bf333c59907a", 00:39:04.685 "strip_size_kb": 64, 
00:39:04.685 "state": "online", 00:39:04.685 "raid_level": "concat", 00:39:04.685 "superblock": true, 00:39:04.685 "num_base_bdevs": 4, 00:39:04.685 "num_base_bdevs_discovered": 4, 00:39:04.685 "num_base_bdevs_operational": 4, 00:39:04.685 "base_bdevs_list": [ 00:39:04.685 { 00:39:04.685 "name": "NewBaseBdev", 00:39:04.685 "uuid": "9f70ce44-cea4-4e3c-9711-77a4a7356da6", 00:39:04.685 "is_configured": true, 00:39:04.685 "data_offset": 2048, 00:39:04.685 "data_size": 63488 00:39:04.685 }, 00:39:04.685 { 00:39:04.685 "name": "BaseBdev2", 00:39:04.685 "uuid": "466b9394-5424-48dc-8ec0-69d464cfde6c", 00:39:04.685 "is_configured": true, 00:39:04.685 "data_offset": 2048, 00:39:04.685 "data_size": 63488 00:39:04.685 }, 00:39:04.685 { 00:39:04.685 "name": "BaseBdev3", 00:39:04.685 "uuid": "f3c18c9d-b430-4038-a0ba-cc0d4cf98d86", 00:39:04.685 "is_configured": true, 00:39:04.685 "data_offset": 2048, 00:39:04.685 "data_size": 63488 00:39:04.685 }, 00:39:04.685 { 00:39:04.685 "name": "BaseBdev4", 00:39:04.685 "uuid": "4725c0e0-c877-4d0a-a1ab-daadab72ced1", 00:39:04.685 "is_configured": true, 00:39:04.685 "data_offset": 2048, 00:39:04.685 "data_size": 63488 00:39:04.685 } 00:39:04.685 ] 00:39:04.685 }' 00:39:04.685 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:04.685 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:05.252 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:39:05.252 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:39:05.252 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:39:05.252 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:39:05.252 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 
00:39:05.252 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:39:05.252 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:39:05.252 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:05.252 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:39:05.252 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:05.252 [2024-12-09 05:29:51.996569] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:05.252 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:05.252 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:05.252 "name": "Existed_Raid", 00:39:05.252 "aliases": [ 00:39:05.252 "0311bc06-ed54-4ab9-ac46-bf333c59907a" 00:39:05.252 ], 00:39:05.252 "product_name": "Raid Volume", 00:39:05.252 "block_size": 512, 00:39:05.252 "num_blocks": 253952, 00:39:05.252 "uuid": "0311bc06-ed54-4ab9-ac46-bf333c59907a", 00:39:05.252 "assigned_rate_limits": { 00:39:05.252 "rw_ios_per_sec": 0, 00:39:05.252 "rw_mbytes_per_sec": 0, 00:39:05.252 "r_mbytes_per_sec": 0, 00:39:05.252 "w_mbytes_per_sec": 0 00:39:05.252 }, 00:39:05.252 "claimed": false, 00:39:05.252 "zoned": false, 00:39:05.252 "supported_io_types": { 00:39:05.252 "read": true, 00:39:05.252 "write": true, 00:39:05.252 "unmap": true, 00:39:05.252 "flush": true, 00:39:05.252 "reset": true, 00:39:05.252 "nvme_admin": false, 00:39:05.252 "nvme_io": false, 00:39:05.252 "nvme_io_md": false, 00:39:05.252 "write_zeroes": true, 00:39:05.252 "zcopy": false, 00:39:05.252 "get_zone_info": false, 00:39:05.252 "zone_management": false, 00:39:05.252 "zone_append": false, 00:39:05.252 "compare": false, 00:39:05.252 "compare_and_write": false, 
00:39:05.252 "abort": false, 00:39:05.252 "seek_hole": false, 00:39:05.252 "seek_data": false, 00:39:05.252 "copy": false, 00:39:05.252 "nvme_iov_md": false 00:39:05.252 }, 00:39:05.252 "memory_domains": [ 00:39:05.252 { 00:39:05.252 "dma_device_id": "system", 00:39:05.252 "dma_device_type": 1 00:39:05.252 }, 00:39:05.252 { 00:39:05.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:05.252 "dma_device_type": 2 00:39:05.252 }, 00:39:05.252 { 00:39:05.252 "dma_device_id": "system", 00:39:05.252 "dma_device_type": 1 00:39:05.252 }, 00:39:05.252 { 00:39:05.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:05.252 "dma_device_type": 2 00:39:05.252 }, 00:39:05.252 { 00:39:05.252 "dma_device_id": "system", 00:39:05.252 "dma_device_type": 1 00:39:05.252 }, 00:39:05.252 { 00:39:05.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:05.252 "dma_device_type": 2 00:39:05.252 }, 00:39:05.252 { 00:39:05.252 "dma_device_id": "system", 00:39:05.252 "dma_device_type": 1 00:39:05.252 }, 00:39:05.252 { 00:39:05.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:05.252 "dma_device_type": 2 00:39:05.252 } 00:39:05.252 ], 00:39:05.252 "driver_specific": { 00:39:05.252 "raid": { 00:39:05.252 "uuid": "0311bc06-ed54-4ab9-ac46-bf333c59907a", 00:39:05.252 "strip_size_kb": 64, 00:39:05.252 "state": "online", 00:39:05.252 "raid_level": "concat", 00:39:05.252 "superblock": true, 00:39:05.252 "num_base_bdevs": 4, 00:39:05.252 "num_base_bdevs_discovered": 4, 00:39:05.252 "num_base_bdevs_operational": 4, 00:39:05.252 "base_bdevs_list": [ 00:39:05.252 { 00:39:05.252 "name": "NewBaseBdev", 00:39:05.252 "uuid": "9f70ce44-cea4-4e3c-9711-77a4a7356da6", 00:39:05.252 "is_configured": true, 00:39:05.252 "data_offset": 2048, 00:39:05.252 "data_size": 63488 00:39:05.252 }, 00:39:05.252 { 00:39:05.252 "name": "BaseBdev2", 00:39:05.252 "uuid": "466b9394-5424-48dc-8ec0-69d464cfde6c", 00:39:05.252 "is_configured": true, 00:39:05.252 "data_offset": 2048, 00:39:05.252 "data_size": 63488 00:39:05.252 }, 
00:39:05.252 { 00:39:05.252 "name": "BaseBdev3", 00:39:05.252 "uuid": "f3c18c9d-b430-4038-a0ba-cc0d4cf98d86", 00:39:05.252 "is_configured": true, 00:39:05.252 "data_offset": 2048, 00:39:05.252 "data_size": 63488 00:39:05.252 }, 00:39:05.252 { 00:39:05.252 "name": "BaseBdev4", 00:39:05.252 "uuid": "4725c0e0-c877-4d0a-a1ab-daadab72ced1", 00:39:05.252 "is_configured": true, 00:39:05.252 "data_offset": 2048, 00:39:05.252 "data_size": 63488 00:39:05.252 } 00:39:05.252 ] 00:39:05.252 } 00:39:05.252 } 00:39:05.252 }' 00:39:05.253 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:39:05.253 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:39:05.253 BaseBdev2 00:39:05.253 BaseBdev3 00:39:05.253 BaseBdev4' 00:39:05.253 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:05.253 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:39:05.253 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:05.253 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:39:05.253 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:05.253 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:05.253 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:05.253 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:05.253 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:39:05.253 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:05.253 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:05.253 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:05.253 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:39:05.253 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:05.253 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:05.511 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:05.511 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:05.511 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:05.511 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:05.511 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:05.511 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:39:05.511 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:05.511 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:05.511 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:05.511 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:05.511 05:29:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:05.511 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:05.511 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:39:05.511 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:05.511 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:05.511 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:05.512 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:05.512 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:05.512 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:05.512 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:39:05.512 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:05.512 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:05.512 [2024-12-09 05:29:52.372206] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:39:05.512 [2024-12-09 05:29:52.372245] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:05.512 [2024-12-09 05:29:52.372381] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:05.512 [2024-12-09 05:29:52.372499] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:05.512 [2024-12-09 05:29:52.372526] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:39:05.512 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:05.512 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72147 00:39:05.512 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72147 ']' 00:39:05.512 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72147 00:39:05.512 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:39:05.512 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:05.512 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72147 00:39:05.512 killing process with pid 72147 00:39:05.512 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:05.512 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:05.512 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72147' 00:39:05.512 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72147 00:39:05.512 [2024-12-09 05:29:52.413069] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:05.512 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72147 00:39:06.089 [2024-12-09 05:29:52.755550] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:07.040 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:39:07.040 00:39:07.041 real 0m12.931s 00:39:07.041 user 0m21.247s 00:39:07.041 sys 0m1.989s 00:39:07.041 05:29:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:07.041 ************************************ 00:39:07.041 END TEST raid_state_function_test_sb 00:39:07.041 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:07.041 ************************************ 00:39:07.041 05:29:53 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:39:07.041 05:29:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:07.041 05:29:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:07.041 05:29:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:07.041 ************************************ 00:39:07.041 START TEST raid_superblock_test 00:39:07.041 ************************************ 00:39:07.041 05:29:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:39:07.041 05:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:39:07.041 05:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:39:07.041 05:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:39:07.041 05:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:39:07.041 05:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:39:07.041 05:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:39:07.041 05:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:39:07.041 05:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:39:07.041 05:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:39:07.041 05:29:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@399 -- # local strip_size 00:39:07.041 05:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:39:07.041 05:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:39:07.041 05:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:39:07.041 05:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:39:07.041 05:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:39:07.041 05:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:39:07.041 05:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72828 00:39:07.041 05:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72828 00:39:07.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:07.041 05:29:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72828 ']' 00:39:07.041 05:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:39:07.041 05:29:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:07.041 05:29:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:07.041 05:29:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:07.041 05:29:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:07.041 05:29:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:07.298 [2024-12-09 05:29:54.067060] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:39:07.299 [2024-12-09 05:29:54.067244] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72828 ] 00:39:07.299 [2024-12-09 05:29:54.248211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:07.557 [2024-12-09 05:29:54.368117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:07.815 [2024-12-09 05:29:54.562445] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:07.815 [2024-12-09 05:29:54.562515] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:08.072 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:08.072 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:39:08.072 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:39:08.072 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:39:08.072 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:39:08.072 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:39:08.072 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:39:08.072 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:39:08.072 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:39:08.072 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:39:08.072 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:39:08.072 
05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.073 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:08.331 malloc1 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:08.331 [2024-12-09 05:29:55.062444] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:39:08.331 [2024-12-09 05:29:55.062822] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:08.331 [2024-12-09 05:29:55.062865] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:39:08.331 [2024-12-09 05:29:55.062883] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:08.331 [2024-12-09 05:29:55.065677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:08.331 [2024-12-09 05:29:55.065718] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:39:08.331 pt1 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:08.331 malloc2 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:08.331 [2024-12-09 05:29:55.117701] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:08.331 [2024-12-09 05:29:55.117997] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:08.331 [2024-12-09 05:29:55.118059] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:39:08.331 [2024-12-09 05:29:55.118076] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:08.331 [2024-12-09 05:29:55.121062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:08.331 [2024-12-09 05:29:55.121140] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:08.331 
pt2 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:08.331 malloc3 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:08.331 [2024-12-09 05:29:55.181035] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:39:08.331 [2024-12-09 05:29:55.181286] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:08.331 [2024-12-09 05:29:55.181331] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:39:08.331 [2024-12-09 05:29:55.181347] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:08.331 [2024-12-09 05:29:55.184063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:08.331 [2024-12-09 05:29:55.184101] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:39:08.331 pt3 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:08.331 malloc4 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:08.331 [2024-12-09 05:29:55.231423] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:39:08.331 [2024-12-09 05:29:55.231503] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:08.331 [2024-12-09 05:29:55.231534] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:39:08.331 [2024-12-09 05:29:55.231547] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:08.331 [2024-12-09 05:29:55.234137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:08.331 [2024-12-09 05:29:55.234360] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:39:08.331 pt4 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:08.331 [2024-12-09 05:29:55.239456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:39:08.331 [2024-12-09 
05:29:55.241728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:08.331 [2024-12-09 05:29:55.242019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:39:08.331 [2024-12-09 05:29:55.242130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:39:08.331 [2024-12-09 05:29:55.242457] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:39:08.331 [2024-12-09 05:29:55.242581] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:39:08.331 [2024-12-09 05:29:55.242939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:39:08.331 [2024-12-09 05:29:55.243166] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:39:08.331 [2024-12-09 05:29:55.243186] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:39:08.331 [2024-12-09 05:29:55.243390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.331 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:08.331 "name": "raid_bdev1", 00:39:08.331 "uuid": "41adf3ea-ea0c-4ee6-b041-9c724e905eac", 00:39:08.331 "strip_size_kb": 64, 00:39:08.331 "state": "online", 00:39:08.331 "raid_level": "concat", 00:39:08.331 "superblock": true, 00:39:08.331 "num_base_bdevs": 4, 00:39:08.331 "num_base_bdevs_discovered": 4, 00:39:08.331 "num_base_bdevs_operational": 4, 00:39:08.331 "base_bdevs_list": [ 00:39:08.331 { 00:39:08.331 "name": "pt1", 00:39:08.331 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:08.331 "is_configured": true, 00:39:08.331 "data_offset": 2048, 00:39:08.331 "data_size": 63488 00:39:08.331 }, 00:39:08.331 { 00:39:08.331 "name": "pt2", 00:39:08.331 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:08.331 "is_configured": true, 00:39:08.331 "data_offset": 2048, 00:39:08.331 "data_size": 63488 00:39:08.331 }, 00:39:08.331 { 00:39:08.331 "name": "pt3", 00:39:08.331 "uuid": "00000000-0000-0000-0000-000000000003", 00:39:08.331 "is_configured": true, 00:39:08.332 "data_offset": 2048, 00:39:08.332 
"data_size": 63488 00:39:08.332 }, 00:39:08.332 { 00:39:08.332 "name": "pt4", 00:39:08.332 "uuid": "00000000-0000-0000-0000-000000000004", 00:39:08.332 "is_configured": true, 00:39:08.332 "data_offset": 2048, 00:39:08.332 "data_size": 63488 00:39:08.332 } 00:39:08.332 ] 00:39:08.332 }' 00:39:08.332 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:08.332 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:08.898 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:39:08.898 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:39:08.898 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:39:08.898 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:39:08.898 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:39:08.898 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:39:08.898 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:39:08.898 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.898 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:08.898 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:39:08.898 [2024-12-09 05:29:55.787981] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:08.898 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.898 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:08.898 "name": "raid_bdev1", 00:39:08.898 "aliases": [ 00:39:08.898 "41adf3ea-ea0c-4ee6-b041-9c724e905eac" 
00:39:08.898 ], 00:39:08.898 "product_name": "Raid Volume", 00:39:08.898 "block_size": 512, 00:39:08.898 "num_blocks": 253952, 00:39:08.898 "uuid": "41adf3ea-ea0c-4ee6-b041-9c724e905eac", 00:39:08.898 "assigned_rate_limits": { 00:39:08.898 "rw_ios_per_sec": 0, 00:39:08.898 "rw_mbytes_per_sec": 0, 00:39:08.898 "r_mbytes_per_sec": 0, 00:39:08.898 "w_mbytes_per_sec": 0 00:39:08.898 }, 00:39:08.898 "claimed": false, 00:39:08.898 "zoned": false, 00:39:08.898 "supported_io_types": { 00:39:08.898 "read": true, 00:39:08.898 "write": true, 00:39:08.898 "unmap": true, 00:39:08.898 "flush": true, 00:39:08.898 "reset": true, 00:39:08.898 "nvme_admin": false, 00:39:08.898 "nvme_io": false, 00:39:08.898 "nvme_io_md": false, 00:39:08.898 "write_zeroes": true, 00:39:08.898 "zcopy": false, 00:39:08.898 "get_zone_info": false, 00:39:08.898 "zone_management": false, 00:39:08.898 "zone_append": false, 00:39:08.898 "compare": false, 00:39:08.898 "compare_and_write": false, 00:39:08.898 "abort": false, 00:39:08.898 "seek_hole": false, 00:39:08.898 "seek_data": false, 00:39:08.898 "copy": false, 00:39:08.898 "nvme_iov_md": false 00:39:08.898 }, 00:39:08.898 "memory_domains": [ 00:39:08.898 { 00:39:08.898 "dma_device_id": "system", 00:39:08.898 "dma_device_type": 1 00:39:08.898 }, 00:39:08.898 { 00:39:08.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:08.898 "dma_device_type": 2 00:39:08.898 }, 00:39:08.898 { 00:39:08.898 "dma_device_id": "system", 00:39:08.898 "dma_device_type": 1 00:39:08.898 }, 00:39:08.898 { 00:39:08.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:08.898 "dma_device_type": 2 00:39:08.898 }, 00:39:08.898 { 00:39:08.898 "dma_device_id": "system", 00:39:08.898 "dma_device_type": 1 00:39:08.898 }, 00:39:08.898 { 00:39:08.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:08.898 "dma_device_type": 2 00:39:08.898 }, 00:39:08.898 { 00:39:08.898 "dma_device_id": "system", 00:39:08.898 "dma_device_type": 1 00:39:08.898 }, 00:39:08.898 { 00:39:08.898 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:39:08.898 "dma_device_type": 2 00:39:08.898 } 00:39:08.898 ], 00:39:08.898 "driver_specific": { 00:39:08.898 "raid": { 00:39:08.898 "uuid": "41adf3ea-ea0c-4ee6-b041-9c724e905eac", 00:39:08.898 "strip_size_kb": 64, 00:39:08.898 "state": "online", 00:39:08.898 "raid_level": "concat", 00:39:08.898 "superblock": true, 00:39:08.898 "num_base_bdevs": 4, 00:39:08.898 "num_base_bdevs_discovered": 4, 00:39:08.898 "num_base_bdevs_operational": 4, 00:39:08.898 "base_bdevs_list": [ 00:39:08.898 { 00:39:08.898 "name": "pt1", 00:39:08.898 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:08.898 "is_configured": true, 00:39:08.898 "data_offset": 2048, 00:39:08.898 "data_size": 63488 00:39:08.898 }, 00:39:08.898 { 00:39:08.898 "name": "pt2", 00:39:08.898 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:08.898 "is_configured": true, 00:39:08.898 "data_offset": 2048, 00:39:08.898 "data_size": 63488 00:39:08.898 }, 00:39:08.898 { 00:39:08.898 "name": "pt3", 00:39:08.898 "uuid": "00000000-0000-0000-0000-000000000003", 00:39:08.898 "is_configured": true, 00:39:08.898 "data_offset": 2048, 00:39:08.898 "data_size": 63488 00:39:08.898 }, 00:39:08.898 { 00:39:08.898 "name": "pt4", 00:39:08.898 "uuid": "00000000-0000-0000-0000-000000000004", 00:39:08.898 "is_configured": true, 00:39:08.898 "data_offset": 2048, 00:39:08.898 "data_size": 63488 00:39:08.898 } 00:39:08.898 ] 00:39:08.898 } 00:39:08.898 } 00:39:08.898 }' 00:39:08.898 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:39:09.155 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:39:09.155 pt2 00:39:09.155 pt3 00:39:09.155 pt4' 00:39:09.155 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:09.155 05:29:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:39:09.155 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:09.155 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:39:09.155 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:09.155 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.155 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:09.155 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.155 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:09.155 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:09.155 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:09.155 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:39:09.155 05:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:09.155 05:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.155 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:09.155 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.155 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:09.155 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:09.155 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:09.155 05:29:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:39:09.155 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:09.155 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.155 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:09.155 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.155 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:09.155 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:09.155 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:09.155 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:09.155 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:39:09.155 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.155 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:09.155 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.412 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:09.412 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:09.412 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:39:09.412 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:39:09.412 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:39:09.412 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:09.412 [2024-12-09 05:29:56.168067] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:09.412 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.412 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=41adf3ea-ea0c-4ee6-b041-9c724e905eac 00:39:09.412 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 41adf3ea-ea0c-4ee6-b041-9c724e905eac ']' 00:39:09.412 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:39:09.412 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.412 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:09.412 [2024-12-09 05:29:56.219654] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:09.412 [2024-12-09 05:29:56.219853] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:09.412 [2024-12-09 05:29:56.220074] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:09.412 [2024-12-09 05:29:56.220293] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:09.412 [2024-12-09 05:29:56.220428] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:39:09.412 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.412 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:09.412 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:39:09.412 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:39:09.412 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:09.412 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.412 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:39:09.412 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:39:09.412 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:39:09.412 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:39:09.412 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.412 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:09.412 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.413 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:39:09.413 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:39:09.413 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.413 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:09.413 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.413 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:39:09.413 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:39:09.413 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.413 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:09.413 05:29:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.413 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:39:09.413 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:39:09.413 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.413 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:09.413 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.413 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:39:09.413 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.413 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:39:09.413 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:09.413 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.413 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:39:09.413 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:39:09.413 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:39:09.413 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:39:09.413 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:39:09.413 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:09.413 05:29:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:39:09.413 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:09.413 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:39:09.413 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.413 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:09.413 [2024-12-09 05:29:56.379709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:39:09.672 request: 00:39:09.672 { 00:39:09.672 "name": "raid_bdev1", 00:39:09.672 "raid_level": "concat", 00:39:09.672 [2024-12-09 05:29:56.383016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:39:09.672 [2024-12-09 05:29:56.383075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:39:09.672 [2024-12-09 05:29:56.383138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:39:09.672 [2024-12-09 05:29:56.383201] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:39:09.672 [2024-12-09 05:29:56.383277] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:39:09.672 [2024-12-09 05:29:56.383306] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:39:09.672 [2024-12-09 05:29:56.383334] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:39:09.672 [2024-12-09 05:29:56.383352] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
00:39:09.672 [2024-12-09 05:29:56.383365] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:39:09.672 "base_bdevs": [ 00:39:09.672 "malloc1", 00:39:09.672 "malloc2", 00:39:09.672 "malloc3", 00:39:09.672 "malloc4" 00:39:09.672 ], 00:39:09.672 "strip_size_kb": 64, 00:39:09.672 "superblock": false, 00:39:09.672 "method": "bdev_raid_create", 00:39:09.672 "req_id": 1 00:39:09.672 } 00:39:09.672 Got JSON-RPC error response 00:39:09.672 response: 00:39:09.672 { 00:39:09.672 "code": -17, 00:39:09.672 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:39:09.672 } 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:09.672 [2024-12-09 05:29:56.447868] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:39:09.672 [2024-12-09 05:29:56.448096] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:09.672 [2024-12-09 05:29:56.448180] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:39:09.672 [2024-12-09 05:29:56.448303] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:09.672 [2024-12-09 05:29:56.451298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:09.672 [2024-12-09 05:29:56.451498] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:39:09.672 [2024-12-09 05:29:56.451695] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:39:09.672 pt1 00:39:09.672 [2024-12-09 05:29:56.451908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:09.672 "name": "raid_bdev1", 00:39:09.672 "uuid": "41adf3ea-ea0c-4ee6-b041-9c724e905eac", 00:39:09.672 "strip_size_kb": 64, 00:39:09.672 "state": "configuring", 00:39:09.672 "raid_level": "concat", 00:39:09.672 "superblock": true, 00:39:09.672 "num_base_bdevs": 4, 00:39:09.672 "num_base_bdevs_discovered": 1, 00:39:09.672 "num_base_bdevs_operational": 4, 00:39:09.672 "base_bdevs_list": [ 00:39:09.672 { 00:39:09.672 "name": "pt1", 00:39:09.672 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:09.672 "is_configured": true, 00:39:09.672 "data_offset": 2048, 00:39:09.672 "data_size": 63488 00:39:09.672 }, 00:39:09.672 { 00:39:09.672 "name": null, 00:39:09.672 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:09.672 "is_configured": false, 00:39:09.672 "data_offset": 2048, 00:39:09.672 "data_size": 63488 00:39:09.672 }, 00:39:09.672 { 00:39:09.672 "name": null, 00:39:09.672 
"uuid": "00000000-0000-0000-0000-000000000003", 00:39:09.672 "is_configured": false, 00:39:09.672 "data_offset": 2048, 00:39:09.672 "data_size": 63488 00:39:09.672 }, 00:39:09.672 { 00:39:09.672 "name": null, 00:39:09.672 "uuid": "00000000-0000-0000-0000-000000000004", 00:39:09.672 "is_configured": false, 00:39:09.672 "data_offset": 2048, 00:39:09.672 "data_size": 63488 00:39:09.672 } 00:39:09.672 ] 00:39:09.672 }' 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:09.672 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:10.239 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:39:10.239 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:10.239 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:10.239 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:10.239 [2024-12-09 05:29:56.984331] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:10.239 [2024-12-09 05:29:56.984628] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:10.239 [2024-12-09 05:29:56.984663] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:39:10.239 [2024-12-09 05:29:56.984681] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:10.239 [2024-12-09 05:29:56.985380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:10.239 [2024-12-09 05:29:56.985461] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:10.239 [2024-12-09 05:29:56.985584] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:39:10.239 [2024-12-09 05:29:56.985638] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:10.239 pt2 00:39:10.239 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:10.239 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:39:10.239 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:10.239 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:10.239 [2024-12-09 05:29:56.992328] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:39:10.239 05:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:10.239 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:39:10.239 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:10.239 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:10.239 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:39:10.239 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:10.239 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:10.239 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:10.239 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:10.240 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:10.240 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:10.240 05:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:10.240 05:29:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:10.240 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:10.240 05:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:10.240 05:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:10.240 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:10.240 "name": "raid_bdev1", 00:39:10.240 "uuid": "41adf3ea-ea0c-4ee6-b041-9c724e905eac", 00:39:10.240 "strip_size_kb": 64, 00:39:10.240 "state": "configuring", 00:39:10.240 "raid_level": "concat", 00:39:10.240 "superblock": true, 00:39:10.240 "num_base_bdevs": 4, 00:39:10.240 "num_base_bdevs_discovered": 1, 00:39:10.240 "num_base_bdevs_operational": 4, 00:39:10.240 "base_bdevs_list": [ 00:39:10.240 { 00:39:10.240 "name": "pt1", 00:39:10.240 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:10.240 "is_configured": true, 00:39:10.240 "data_offset": 2048, 00:39:10.240 "data_size": 63488 00:39:10.240 }, 00:39:10.240 { 00:39:10.240 "name": null, 00:39:10.240 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:10.240 "is_configured": false, 00:39:10.240 "data_offset": 0, 00:39:10.240 "data_size": 63488 00:39:10.240 }, 00:39:10.240 { 00:39:10.240 "name": null, 00:39:10.240 "uuid": "00000000-0000-0000-0000-000000000003", 00:39:10.240 "is_configured": false, 00:39:10.240 "data_offset": 2048, 00:39:10.240 "data_size": 63488 00:39:10.240 }, 00:39:10.240 { 00:39:10.240 "name": null, 00:39:10.240 "uuid": "00000000-0000-0000-0000-000000000004", 00:39:10.240 "is_configured": false, 00:39:10.240 "data_offset": 2048, 00:39:10.240 "data_size": 63488 00:39:10.240 } 00:39:10.240 ] 00:39:10.240 }' 00:39:10.240 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:10.240 05:29:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:39:10.807 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:39:10.807 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:39:10.807 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:10.807 05:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:10.807 05:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:10.807 [2024-12-09 05:29:57.528464] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:10.807 [2024-12-09 05:29:57.528711] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:10.807 [2024-12-09 05:29:57.528810] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:39:10.807 [2024-12-09 05:29:57.529008] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:10.807 [2024-12-09 05:29:57.529654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:10.807 [2024-12-09 05:29:57.529686] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:10.807 [2024-12-09 05:29:57.529830] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:39:10.807 [2024-12-09 05:29:57.529860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:10.807 pt2 00:39:10.807 05:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:10.807 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:39:10.807 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:39:10.807 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:39:10.807 05:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:10.807 05:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:10.807 [2024-12-09 05:29:57.536432] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:39:10.807 [2024-12-09 05:29:57.536650] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:10.807 [2024-12-09 05:29:57.536729] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:39:10.807 [2024-12-09 05:29:57.537038] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:10.807 [2024-12-09 05:29:57.537577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:10.807 [2024-12-09 05:29:57.537742] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:39:10.807 [2024-12-09 05:29:57.537997] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:39:10.807 [2024-12-09 05:29:57.538151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:39:10.807 pt3 00:39:10.807 05:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:10.807 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:39:10.808 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:39:10.808 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:39:10.808 05:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:10.808 05:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:10.808 [2024-12-09 05:29:57.544412] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:39:10.808 [2024-12-09 05:29:57.544587] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:10.808 [2024-12-09 05:29:57.544652] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:39:10.808 [2024-12-09 05:29:57.544746] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:10.808 [2024-12-09 05:29:57.545281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:10.808 [2024-12-09 05:29:57.545318] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:39:10.808 [2024-12-09 05:29:57.545408] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:39:10.808 [2024-12-09 05:29:57.545438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:39:10.808 [2024-12-09 05:29:57.545604] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:39:10.808 [2024-12-09 05:29:57.545618] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:39:10.808 [2024-12-09 05:29:57.545981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:39:10.808 [2024-12-09 05:29:57.546230] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:39:10.808 [2024-12-09 05:29:57.546267] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:39:10.808 [2024-12-09 05:29:57.546419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:10.808 pt4 00:39:10.808 05:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:10.808 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:39:10.808 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:39:10.808 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:39:10.808 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:10.808 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:10.808 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:39:10.808 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:10.808 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:10.808 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:10.808 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:10.808 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:10.808 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:10.808 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:10.808 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:10.808 05:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:10.808 05:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:10.808 05:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:10.808 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:10.808 "name": "raid_bdev1", 00:39:10.808 "uuid": "41adf3ea-ea0c-4ee6-b041-9c724e905eac", 00:39:10.808 "strip_size_kb": 64, 00:39:10.808 "state": "online", 00:39:10.808 "raid_level": "concat", 00:39:10.808 
"superblock": true, 00:39:10.808 "num_base_bdevs": 4, 00:39:10.808 "num_base_bdevs_discovered": 4, 00:39:10.808 "num_base_bdevs_operational": 4, 00:39:10.808 "base_bdevs_list": [ 00:39:10.808 { 00:39:10.808 "name": "pt1", 00:39:10.808 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:10.808 "is_configured": true, 00:39:10.808 "data_offset": 2048, 00:39:10.808 "data_size": 63488 00:39:10.808 }, 00:39:10.808 { 00:39:10.808 "name": "pt2", 00:39:10.808 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:10.808 "is_configured": true, 00:39:10.808 "data_offset": 2048, 00:39:10.808 "data_size": 63488 00:39:10.808 }, 00:39:10.808 { 00:39:10.808 "name": "pt3", 00:39:10.808 "uuid": "00000000-0000-0000-0000-000000000003", 00:39:10.808 "is_configured": true, 00:39:10.808 "data_offset": 2048, 00:39:10.808 "data_size": 63488 00:39:10.808 }, 00:39:10.808 { 00:39:10.808 "name": "pt4", 00:39:10.808 "uuid": "00000000-0000-0000-0000-000000000004", 00:39:10.808 "is_configured": true, 00:39:10.808 "data_offset": 2048, 00:39:10.808 "data_size": 63488 00:39:10.808 } 00:39:10.808 ] 00:39:10.808 }' 00:39:10.808 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:10.808 05:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:11.376 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:39:11.376 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:39:11.376 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:39:11.376 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:39:11.376 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:39:11.376 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:39:11.376 05:29:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:39:11.376 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:39:11.376 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.376 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:11.376 [2024-12-09 05:29:58.089029] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:11.376 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.376 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:11.376 "name": "raid_bdev1", 00:39:11.376 "aliases": [ 00:39:11.376 "41adf3ea-ea0c-4ee6-b041-9c724e905eac" 00:39:11.376 ], 00:39:11.376 "product_name": "Raid Volume", 00:39:11.376 "block_size": 512, 00:39:11.376 "num_blocks": 253952, 00:39:11.376 "uuid": "41adf3ea-ea0c-4ee6-b041-9c724e905eac", 00:39:11.376 "assigned_rate_limits": { 00:39:11.376 "rw_ios_per_sec": 0, 00:39:11.376 "rw_mbytes_per_sec": 0, 00:39:11.376 "r_mbytes_per_sec": 0, 00:39:11.376 "w_mbytes_per_sec": 0 00:39:11.376 }, 00:39:11.376 "claimed": false, 00:39:11.376 "zoned": false, 00:39:11.376 "supported_io_types": { 00:39:11.376 "read": true, 00:39:11.376 "write": true, 00:39:11.376 "unmap": true, 00:39:11.376 "flush": true, 00:39:11.376 "reset": true, 00:39:11.376 "nvme_admin": false, 00:39:11.376 "nvme_io": false, 00:39:11.376 "nvme_io_md": false, 00:39:11.376 "write_zeroes": true, 00:39:11.376 "zcopy": false, 00:39:11.376 "get_zone_info": false, 00:39:11.376 "zone_management": false, 00:39:11.376 "zone_append": false, 00:39:11.376 "compare": false, 00:39:11.376 "compare_and_write": false, 00:39:11.376 "abort": false, 00:39:11.376 "seek_hole": false, 00:39:11.376 "seek_data": false, 00:39:11.376 "copy": false, 00:39:11.376 "nvme_iov_md": false 00:39:11.376 }, 00:39:11.376 
"memory_domains": [ 00:39:11.376 { 00:39:11.376 "dma_device_id": "system", 00:39:11.376 "dma_device_type": 1 00:39:11.376 }, 00:39:11.376 { 00:39:11.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:11.376 "dma_device_type": 2 00:39:11.376 }, 00:39:11.376 { 00:39:11.376 "dma_device_id": "system", 00:39:11.376 "dma_device_type": 1 00:39:11.376 }, 00:39:11.376 { 00:39:11.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:11.376 "dma_device_type": 2 00:39:11.376 }, 00:39:11.376 { 00:39:11.376 "dma_device_id": "system", 00:39:11.376 "dma_device_type": 1 00:39:11.376 }, 00:39:11.376 { 00:39:11.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:11.376 "dma_device_type": 2 00:39:11.376 }, 00:39:11.376 { 00:39:11.376 "dma_device_id": "system", 00:39:11.376 "dma_device_type": 1 00:39:11.376 }, 00:39:11.376 { 00:39:11.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:11.376 "dma_device_type": 2 00:39:11.376 } 00:39:11.376 ], 00:39:11.376 "driver_specific": { 00:39:11.376 "raid": { 00:39:11.376 "uuid": "41adf3ea-ea0c-4ee6-b041-9c724e905eac", 00:39:11.376 "strip_size_kb": 64, 00:39:11.376 "state": "online", 00:39:11.376 "raid_level": "concat", 00:39:11.376 "superblock": true, 00:39:11.376 "num_base_bdevs": 4, 00:39:11.376 "num_base_bdevs_discovered": 4, 00:39:11.376 "num_base_bdevs_operational": 4, 00:39:11.376 "base_bdevs_list": [ 00:39:11.376 { 00:39:11.376 "name": "pt1", 00:39:11.376 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:11.376 "is_configured": true, 00:39:11.376 "data_offset": 2048, 00:39:11.376 "data_size": 63488 00:39:11.376 }, 00:39:11.376 { 00:39:11.376 "name": "pt2", 00:39:11.376 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:11.376 "is_configured": true, 00:39:11.376 "data_offset": 2048, 00:39:11.376 "data_size": 63488 00:39:11.376 }, 00:39:11.376 { 00:39:11.376 "name": "pt3", 00:39:11.376 "uuid": "00000000-0000-0000-0000-000000000003", 00:39:11.376 "is_configured": true, 00:39:11.376 "data_offset": 2048, 00:39:11.376 "data_size": 63488 
00:39:11.376 }, 00:39:11.376 { 00:39:11.376 "name": "pt4", 00:39:11.376 "uuid": "00000000-0000-0000-0000-000000000004", 00:39:11.376 "is_configured": true, 00:39:11.376 "data_offset": 2048, 00:39:11.376 "data_size": 63488 00:39:11.376 } 00:39:11.376 ] 00:39:11.376 } 00:39:11.376 } 00:39:11.376 }' 00:39:11.376 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:39:11.376 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:39:11.376 pt2 00:39:11.376 pt3 00:39:11.376 pt4' 00:39:11.376 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:11.377 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:39:11.377 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:11.377 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:39:11.377 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:11.377 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.377 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:11.377 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.377 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:11.377 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:11.377 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:11.377 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:39:11.377 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.377 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:11.377 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:11.377 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:39:11.636 [2024-12-09 05:29:58.481037] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 41adf3ea-ea0c-4ee6-b041-9c724e905eac '!=' 41adf3ea-ea0c-4ee6-b041-9c724e905eac ']' 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72828 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72828 ']' 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72828 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@959 -- # uname 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72828 00:39:11.636 killing process with pid 72828 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72828' 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72828 00:39:11.636 [2024-12-09 05:29:58.564028] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:11.636 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72828 00:39:11.636 [2024-12-09 05:29:58.564113] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:11.636 [2024-12-09 05:29:58.564271] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:11.636 [2024-12-09 05:29:58.564301] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:39:12.205 [2024-12-09 05:29:58.904821] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:13.142 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:39:13.142 00:39:13.142 real 0m6.154s 00:39:13.142 user 0m9.124s 00:39:13.142 sys 0m0.980s 00:39:13.142 ************************************ 00:39:13.142 END TEST raid_superblock_test 00:39:13.142 ************************************ 00:39:13.142 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:13.142 05:30:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:13.401 05:30:00 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:39:13.401 05:30:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:39:13.401 05:30:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:13.401 05:30:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:13.401 ************************************ 00:39:13.401 START TEST raid_read_error_test 00:39:13.401 ************************************ 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.SNhAhJli5y 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73093 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73093 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:39:13.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73093 ']' 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:13.401 05:30:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:13.401 [2024-12-09 05:30:00.300541] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:39:13.401 [2024-12-09 05:30:00.300808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73093 ] 00:39:13.660 [2024-12-09 05:30:00.489294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:13.919 [2024-12-09 05:30:00.639646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:13.919 [2024-12-09 05:30:00.864365] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:13.919 [2024-12-09 05:30:00.864413] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:14.487 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:14.487 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:39:14.487 05:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:39:14.487 05:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:39:14.487 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:14.487 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:14.487 BaseBdev1_malloc 00:39:14.487 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:14.487 05:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:39:14.487 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:14.487 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:14.487 true 00:39:14.487 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:39:14.487 05:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:39:14.487 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:14.487 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:14.487 [2024-12-09 05:30:01.368848] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:39:14.487 [2024-12-09 05:30:01.368955] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:14.487 [2024-12-09 05:30:01.368988] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:39:14.487 [2024-12-09 05:30:01.369006] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:14.487 [2024-12-09 05:30:01.372007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:14.487 BaseBdev1 00:39:14.487 [2024-12-09 05:30:01.372204] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:39:14.487 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:14.487 05:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:39:14.487 05:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:39:14.487 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:14.487 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:14.487 BaseBdev2_malloc 00:39:14.487 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:14.487 05:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:39:14.487 05:30:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:39:14.487 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:14.487 true 00:39:14.487 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:14.487 05:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:39:14.487 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:14.487 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:14.487 [2024-12-09 05:30:01.427207] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:39:14.487 [2024-12-09 05:30:01.427580] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:14.487 [2024-12-09 05:30:01.427620] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:39:14.487 [2024-12-09 05:30:01.427640] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:14.487 [2024-12-09 05:30:01.430746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:14.487 BaseBdev2 00:39:14.487 [2024-12-09 05:30:01.430992] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:39:14.487 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:14.487 05:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:39:14.487 05:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:39:14.487 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:14.487 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:14.746 BaseBdev3_malloc 00:39:14.746 05:30:01 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:14.746 true 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:14.746 [2024-12-09 05:30:01.496020] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:39:14.746 [2024-12-09 05:30:01.496398] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:14.746 [2024-12-09 05:30:01.496472] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:39:14.746 [2024-12-09 05:30:01.496594] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:14.746 [2024-12-09 05:30:01.499844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:14.746 BaseBdev3 00:39:14.746 [2024-12-09 05:30:01.500086] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:14.746 BaseBdev4_malloc 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:14.746 true 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:14.746 [2024-12-09 05:30:01.558158] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:39:14.746 [2024-12-09 05:30:01.558271] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:14.746 [2024-12-09 05:30:01.558317] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:39:14.746 [2024-12-09 05:30:01.558336] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:14.746 [2024-12-09 05:30:01.561219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:14.746 BaseBdev4 00:39:14.746 [2024-12-09 05:30:01.561425] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:14.746 [2024-12-09 05:30:01.566435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:14.746 [2024-12-09 05:30:01.569108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:14.746 [2024-12-09 05:30:01.569381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:39:14.746 [2024-12-09 05:30:01.569524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:39:14.746 [2024-12-09 05:30:01.569884] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:39:14.746 [2024-12-09 05:30:01.570014] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:39:14.746 [2024-12-09 05:30:01.570428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:39:14.746 [2024-12-09 05:30:01.570834] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:39:14.746 [2024-12-09 05:30:01.570956] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:39:14.746 [2024-12-09 05:30:01.571248] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:39:14.746 05:30:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:14.746 "name": "raid_bdev1", 00:39:14.746 "uuid": "cddb9e44-9347-4aac-aafa-2ad56eb9838f", 00:39:14.746 "strip_size_kb": 64, 00:39:14.746 "state": "online", 00:39:14.746 "raid_level": "concat", 00:39:14.746 "superblock": true, 00:39:14.746 "num_base_bdevs": 4, 00:39:14.746 "num_base_bdevs_discovered": 4, 00:39:14.746 "num_base_bdevs_operational": 4, 00:39:14.746 "base_bdevs_list": [ 
00:39:14.746 { 00:39:14.746 "name": "BaseBdev1", 00:39:14.746 "uuid": "01815e67-9629-5768-a3fa-7c0397f44e87", 00:39:14.746 "is_configured": true, 00:39:14.746 "data_offset": 2048, 00:39:14.746 "data_size": 63488 00:39:14.746 }, 00:39:14.746 { 00:39:14.746 "name": "BaseBdev2", 00:39:14.746 "uuid": "04fe3f1e-e57a-5c25-9531-0cb1d7b61fb3", 00:39:14.746 "is_configured": true, 00:39:14.746 "data_offset": 2048, 00:39:14.746 "data_size": 63488 00:39:14.746 }, 00:39:14.746 { 00:39:14.746 "name": "BaseBdev3", 00:39:14.746 "uuid": "328e9313-641e-5d37-99ab-af6103f392aa", 00:39:14.746 "is_configured": true, 00:39:14.746 "data_offset": 2048, 00:39:14.746 "data_size": 63488 00:39:14.746 }, 00:39:14.746 { 00:39:14.746 "name": "BaseBdev4", 00:39:14.746 "uuid": "a928e87e-9581-5804-8c53-7d21a5ab8899", 00:39:14.746 "is_configured": true, 00:39:14.746 "data_offset": 2048, 00:39:14.746 "data_size": 63488 00:39:14.746 } 00:39:14.746 ] 00:39:14.746 }' 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:14.746 05:30:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:15.313 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:39:15.313 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:39:15.313 [2024-12-09 05:30:02.196863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:39:16.285 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:39:16.285 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:16.285 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:16.285 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:16.285 05:30:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:39:16.285 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:39:16.285 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:39:16.285 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:39:16.285 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:16.285 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:16.285 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:39:16.285 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:16.285 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:16.285 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:16.285 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:16.285 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:16.285 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:16.285 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:16.285 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:16.285 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:16.285 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:16.285 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:16.285 05:30:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:16.285 "name": "raid_bdev1", 00:39:16.285 "uuid": "cddb9e44-9347-4aac-aafa-2ad56eb9838f", 00:39:16.285 "strip_size_kb": 64, 00:39:16.285 "state": "online", 00:39:16.285 "raid_level": "concat", 00:39:16.285 "superblock": true, 00:39:16.285 "num_base_bdevs": 4, 00:39:16.285 "num_base_bdevs_discovered": 4, 00:39:16.285 "num_base_bdevs_operational": 4, 00:39:16.285 "base_bdevs_list": [ 00:39:16.285 { 00:39:16.285 "name": "BaseBdev1", 00:39:16.285 "uuid": "01815e67-9629-5768-a3fa-7c0397f44e87", 00:39:16.285 "is_configured": true, 00:39:16.285 "data_offset": 2048, 00:39:16.285 "data_size": 63488 00:39:16.285 }, 00:39:16.285 { 00:39:16.285 "name": "BaseBdev2", 00:39:16.285 "uuid": "04fe3f1e-e57a-5c25-9531-0cb1d7b61fb3", 00:39:16.285 "is_configured": true, 00:39:16.285 "data_offset": 2048, 00:39:16.285 "data_size": 63488 00:39:16.285 }, 00:39:16.285 { 00:39:16.285 "name": "BaseBdev3", 00:39:16.285 "uuid": "328e9313-641e-5d37-99ab-af6103f392aa", 00:39:16.285 "is_configured": true, 00:39:16.285 "data_offset": 2048, 00:39:16.285 "data_size": 63488 00:39:16.285 }, 00:39:16.285 { 00:39:16.285 "name": "BaseBdev4", 00:39:16.285 "uuid": "a928e87e-9581-5804-8c53-7d21a5ab8899", 00:39:16.285 "is_configured": true, 00:39:16.285 "data_offset": 2048, 00:39:16.285 "data_size": 63488 00:39:16.285 } 00:39:16.285 ] 00:39:16.285 }' 00:39:16.285 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:16.285 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:16.852 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:39:16.852 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:16.852 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:16.852 [2024-12-09 05:30:03.627858] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:16.852 [2024-12-09 05:30:03.627922] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:16.852 { 00:39:16.852 "results": [ 00:39:16.852 { 00:39:16.852 "job": "raid_bdev1", 00:39:16.852 "core_mask": "0x1", 00:39:16.852 "workload": "randrw", 00:39:16.852 "percentage": 50, 00:39:16.852 "status": "finished", 00:39:16.852 "queue_depth": 1, 00:39:16.852 "io_size": 131072, 00:39:16.852 "runtime": 1.428149, 00:39:16.852 "iops": 9734.978633181832, 00:39:16.852 "mibps": 1216.872329147729, 00:39:16.852 "io_failed": 1, 00:39:16.852 "io_timeout": 0, 00:39:16.852 "avg_latency_us": 143.89487603305787, 00:39:16.852 "min_latency_us": 37.70181818181818, 00:39:16.852 "max_latency_us": 1772.4509090909091 00:39:16.852 } 00:39:16.852 ], 00:39:16.852 "core_count": 1 00:39:16.852 } 00:39:16.852 [2024-12-09 05:30:03.631562] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:16.852 [2024-12-09 05:30:03.631653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:16.852 [2024-12-09 05:30:03.631715] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:16.852 [2024-12-09 05:30:03.631737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:39:16.852 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:16.852 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73093 00:39:16.852 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73093 ']' 00:39:16.852 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73093 00:39:16.852 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:39:16.852 05:30:03 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:16.852 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73093 00:39:16.852 killing process with pid 73093 00:39:16.852 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:16.852 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:16.852 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73093' 00:39:16.852 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73093 00:39:16.852 [2024-12-09 05:30:03.670553] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:16.852 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73093 00:39:17.110 [2024-12-09 05:30:03.958623] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:18.483 05:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:39:18.483 05:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.SNhAhJli5y 00:39:18.483 05:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:39:18.483 05:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:39:18.483 05:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:39:18.483 05:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:39:18.483 05:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:39:18.483 05:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:39:18.483 00:39:18.483 real 0m5.009s 00:39:18.483 user 0m6.049s 00:39:18.483 sys 0m0.696s 00:39:18.483 ************************************ 00:39:18.483 END TEST raid_read_error_test 
00:39:18.483 ************************************ 00:39:18.483 05:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:18.483 05:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:18.483 05:30:05 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:39:18.483 05:30:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:39:18.483 05:30:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:18.483 05:30:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:18.483 ************************************ 00:39:18.483 START TEST raid_write_error_test 00:39:18.483 ************************************ 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.VEjOgjXTiX 00:39:18.483 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73244 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73244 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73244 ']' 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:18.483 05:30:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:18.483 [2024-12-09 05:30:05.380178] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:39:18.483 [2024-12-09 05:30:05.381071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73244 ] 00:39:18.741 [2024-12-09 05:30:05.569481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:18.741 [2024-12-09 05:30:05.703161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:18.999 [2024-12-09 05:30:05.911278] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:18.999 [2024-12-09 05:30:05.911680] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:19.565 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:19.565 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:19.566 BaseBdev1_malloc 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:19.566 true 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:19.566 [2024-12-09 05:30:06.380216] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:39:19.566 [2024-12-09 05:30:06.380297] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:19.566 [2024-12-09 05:30:06.380350] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:39:19.566 [2024-12-09 05:30:06.380375] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:19.566 [2024-12-09 05:30:06.383549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:19.566 BaseBdev1 00:39:19.566 [2024-12-09 05:30:06.383824] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:19.566 BaseBdev2_malloc 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:39:19.566 05:30:06 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:19.566 true 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:19.566 [2024-12-09 05:30:06.444326] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:39:19.566 [2024-12-09 05:30:06.444438] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:19.566 [2024-12-09 05:30:06.444467] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:39:19.566 [2024-12-09 05:30:06.444483] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:19.566 [2024-12-09 05:30:06.447474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:19.566 [2024-12-09 05:30:06.447534] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:39:19.566 BaseBdev2 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:39:19.566 BaseBdev3_malloc 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:19.566 true 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:19.566 [2024-12-09 05:30:06.514976] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:39:19.566 [2024-12-09 05:30:06.515369] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:19.566 [2024-12-09 05:30:06.515407] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:39:19.566 [2024-12-09 05:30:06.515425] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:19.566 [2024-12-09 05:30:06.518471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:19.566 [2024-12-09 05:30:06.518521] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:39:19.566 BaseBdev3 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:19.566 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:19.825 BaseBdev4_malloc 00:39:19.825 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:19.825 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:39:19.825 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:19.825 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:19.825 true 00:39:19.825 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:19.825 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:39:19.825 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:19.825 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:19.825 [2024-12-09 05:30:06.579131] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:39:19.825 [2024-12-09 05:30:06.579498] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:19.825 [2024-12-09 05:30:06.579537] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:39:19.825 [2024-12-09 05:30:06.579555] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:19.825 [2024-12-09 05:30:06.582609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:19.825 BaseBdev4 00:39:19.825 [2024-12-09 05:30:06.582820] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 
00:39:19.825 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:19.825 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:39:19.825 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:19.825 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:19.825 [2024-12-09 05:30:06.587219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:19.825 [2024-12-09 05:30:06.589680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:19.825 [2024-12-09 05:30:06.589958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:39:19.825 [2024-12-09 05:30:06.590078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:39:19.825 [2024-12-09 05:30:06.590410] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:39:19.825 [2024-12-09 05:30:06.590435] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:39:19.825 [2024-12-09 05:30:06.590770] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:39:19.825 [2024-12-09 05:30:06.591049] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:39:19.825 [2024-12-09 05:30:06.591066] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:39:19.825 [2024-12-09 05:30:06.591296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:19.825 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:19.825 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:39:19.825 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:19.825 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:19.825 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:39:19.825 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:19.825 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:19.825 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:19.825 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:19.825 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:19.825 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:19.825 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:19.825 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:19.825 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:19.825 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:19.825 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:19.825 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:19.825 "name": "raid_bdev1", 00:39:19.825 "uuid": "7757d4e4-0ece-4140-a60f-d7c2fe507f35", 00:39:19.825 "strip_size_kb": 64, 00:39:19.825 "state": "online", 00:39:19.825 "raid_level": "concat", 00:39:19.825 "superblock": true, 00:39:19.825 "num_base_bdevs": 4, 00:39:19.825 "num_base_bdevs_discovered": 4, 00:39:19.825 
"num_base_bdevs_operational": 4, 00:39:19.825 "base_bdevs_list": [ 00:39:19.825 { 00:39:19.825 "name": "BaseBdev1", 00:39:19.825 "uuid": "878c7ea4-8c54-553b-b443-369f0cb187c9", 00:39:19.825 "is_configured": true, 00:39:19.825 "data_offset": 2048, 00:39:19.825 "data_size": 63488 00:39:19.825 }, 00:39:19.825 { 00:39:19.825 "name": "BaseBdev2", 00:39:19.825 "uuid": "0da76925-6cb2-5914-9c10-d5993a93cf16", 00:39:19.825 "is_configured": true, 00:39:19.825 "data_offset": 2048, 00:39:19.825 "data_size": 63488 00:39:19.825 }, 00:39:19.825 { 00:39:19.825 "name": "BaseBdev3", 00:39:19.825 "uuid": "200077aa-beca-549b-9a0d-70b4fa048fb4", 00:39:19.825 "is_configured": true, 00:39:19.825 "data_offset": 2048, 00:39:19.825 "data_size": 63488 00:39:19.825 }, 00:39:19.825 { 00:39:19.825 "name": "BaseBdev4", 00:39:19.825 "uuid": "c33e1d3f-6824-5141-8b09-d906ee96cf43", 00:39:19.825 "is_configured": true, 00:39:19.825 "data_offset": 2048, 00:39:19.825 "data_size": 63488 00:39:19.825 } 00:39:19.825 ] 00:39:19.825 }' 00:39:19.825 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:19.825 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:20.486 05:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:39:20.486 05:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:39:20.486 [2024-12-09 05:30:07.289266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:39:21.421 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:39:21.421 05:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:21.421 05:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:21.421 05:30:08 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:21.421 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:39:21.421 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:39:21.421 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:39:21.421 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:39:21.421 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:21.421 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:21.421 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:39:21.421 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:21.421 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:21.421 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:21.421 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:21.421 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:21.421 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:21.421 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:21.421 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:21.421 05:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:21.421 05:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:21.421 05:30:08 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:21.421 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:21.421 "name": "raid_bdev1", 00:39:21.421 "uuid": "7757d4e4-0ece-4140-a60f-d7c2fe507f35", 00:39:21.421 "strip_size_kb": 64, 00:39:21.421 "state": "online", 00:39:21.421 "raid_level": "concat", 00:39:21.421 "superblock": true, 00:39:21.421 "num_base_bdevs": 4, 00:39:21.421 "num_base_bdevs_discovered": 4, 00:39:21.421 "num_base_bdevs_operational": 4, 00:39:21.421 "base_bdevs_list": [ 00:39:21.421 { 00:39:21.421 "name": "BaseBdev1", 00:39:21.421 "uuid": "878c7ea4-8c54-553b-b443-369f0cb187c9", 00:39:21.421 "is_configured": true, 00:39:21.421 "data_offset": 2048, 00:39:21.421 "data_size": 63488 00:39:21.421 }, 00:39:21.421 { 00:39:21.421 "name": "BaseBdev2", 00:39:21.421 "uuid": "0da76925-6cb2-5914-9c10-d5993a93cf16", 00:39:21.421 "is_configured": true, 00:39:21.421 "data_offset": 2048, 00:39:21.421 "data_size": 63488 00:39:21.421 }, 00:39:21.421 { 00:39:21.421 "name": "BaseBdev3", 00:39:21.421 "uuid": "200077aa-beca-549b-9a0d-70b4fa048fb4", 00:39:21.421 "is_configured": true, 00:39:21.421 "data_offset": 2048, 00:39:21.421 "data_size": 63488 00:39:21.421 }, 00:39:21.421 { 00:39:21.421 "name": "BaseBdev4", 00:39:21.421 "uuid": "c33e1d3f-6824-5141-8b09-d906ee96cf43", 00:39:21.421 "is_configured": true, 00:39:21.421 "data_offset": 2048, 00:39:21.421 "data_size": 63488 00:39:21.421 } 00:39:21.421 ] 00:39:21.421 }' 00:39:21.421 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:21.421 05:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:21.988 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:39:21.988 05:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:21.988 05:30:08 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:39:21.988 [2024-12-09 05:30:08.712873] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:21.988 [2024-12-09 05:30:08.712941] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:21.988 { 00:39:21.988 "results": [ 00:39:21.988 { 00:39:21.988 "job": "raid_bdev1", 00:39:21.988 "core_mask": "0x1", 00:39:21.988 "workload": "randrw", 00:39:21.988 "percentage": 50, 00:39:21.988 "status": "finished", 00:39:21.988 "queue_depth": 1, 00:39:21.988 "io_size": 131072, 00:39:21.988 "runtime": 1.420913, 00:39:21.988 "iops": 8483.278005057311, 00:39:21.988 "mibps": 1060.4097506321639, 00:39:21.988 "io_failed": 1, 00:39:21.988 "io_timeout": 0, 00:39:21.988 "avg_latency_us": 164.73138116963915, 00:39:21.988 "min_latency_us": 37.70181818181818, 00:39:21.988 "max_latency_us": 1951.1854545454546 00:39:21.988 } 00:39:21.988 ], 00:39:21.988 "core_count": 1 00:39:21.988 } 00:39:21.988 [2024-12-09 05:30:08.716567] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:21.988 [2024-12-09 05:30:08.716651] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:21.988 [2024-12-09 05:30:08.716748] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:21.988 [2024-12-09 05:30:08.716768] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:39:21.988 05:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:21.988 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73244 00:39:21.988 05:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73244 ']' 00:39:21.988 05:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73244 00:39:21.988 05:30:08 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:39:21.988 05:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:21.988 05:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73244 00:39:21.988 killing process with pid 73244 00:39:21.988 05:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:21.988 05:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:21.988 05:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73244' 00:39:21.988 05:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73244 00:39:21.988 [2024-12-09 05:30:08.753653] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:21.988 05:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73244 00:39:22.247 [2024-12-09 05:30:09.115934] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:24.147 05:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.VEjOgjXTiX 00:39:24.147 05:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:39:24.147 05:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:39:24.147 05:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:39:24.147 05:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:39:24.147 05:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:39:24.147 05:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:39:24.147 05:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:39:24.147 00:39:24.147 real 0m5.366s 00:39:24.147 user 0m6.452s 
00:39:24.147 sys 0m0.659s 00:39:24.147 ************************************ 00:39:24.147 END TEST raid_write_error_test 00:39:24.147 ************************************ 00:39:24.147 05:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:24.147 05:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:39:24.147 05:30:10 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:39:24.147 05:30:10 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:39:24.147 05:30:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:39:24.147 05:30:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:24.147 05:30:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:24.147 ************************************ 00:39:24.147 START TEST raid_state_function_test 00:39:24.147 ************************************ 00:39:24.147 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:39:24.147 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:39:24.147 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:39:24.148 
05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:39:24.148 05:30:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73402 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73402' 00:39:24.148 Process raid pid: 73402 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73402 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73402 ']' 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:24.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:24.148 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:24.148 [2024-12-09 05:30:10.781422] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:39:24.148 [2024-12-09 05:30:10.781877] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:24.148 [2024-12-09 05:30:10.967156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:24.407 [2024-12-09 05:30:11.126987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:24.407 [2024-12-09 05:30:11.376098] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:24.407 [2024-12-09 05:30:11.376208] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:24.976 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:24.976 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:39:24.976 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:39:24.976 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.976 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:24.976 [2024-12-09 05:30:11.858002] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:39:24.976 [2024-12-09 05:30:11.858117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:39:24.976 [2024-12-09 05:30:11.858149] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:39:24.976 [2024-12-09 05:30:11.858171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:39:24.976 [2024-12-09 05:30:11.858184] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:39:24.976 [2024-12-09 05:30:11.858201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:39:24.976 [2024-12-09 05:30:11.858214] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:39:24.976 [2024-12-09 05:30:11.858284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:39:24.976 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.976 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:24.976 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:24.976 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:24.976 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:24.976 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:24.976 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:24.976 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:24.976 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:24.976 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:24.976 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:24.976 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:24.976 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:24.976 05:30:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.976 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:24.976 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.976 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:24.976 "name": "Existed_Raid", 00:39:24.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:24.976 "strip_size_kb": 0, 00:39:24.976 "state": "configuring", 00:39:24.976 "raid_level": "raid1", 00:39:24.976 "superblock": false, 00:39:24.976 "num_base_bdevs": 4, 00:39:24.976 "num_base_bdevs_discovered": 0, 00:39:24.976 "num_base_bdevs_operational": 4, 00:39:24.976 "base_bdevs_list": [ 00:39:24.976 { 00:39:24.976 "name": "BaseBdev1", 00:39:24.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:24.976 "is_configured": false, 00:39:24.976 "data_offset": 0, 00:39:24.976 "data_size": 0 00:39:24.976 }, 00:39:24.976 { 00:39:24.976 "name": "BaseBdev2", 00:39:24.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:24.976 "is_configured": false, 00:39:24.976 "data_offset": 0, 00:39:24.976 "data_size": 0 00:39:24.976 }, 00:39:24.976 { 00:39:24.976 "name": "BaseBdev3", 00:39:24.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:24.976 "is_configured": false, 00:39:24.976 "data_offset": 0, 00:39:24.976 "data_size": 0 00:39:24.976 }, 00:39:24.976 { 00:39:24.976 "name": "BaseBdev4", 00:39:24.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:24.976 "is_configured": false, 00:39:24.976 "data_offset": 0, 00:39:24.976 "data_size": 0 00:39:24.976 } 00:39:24.976 ] 00:39:24.976 }' 00:39:24.976 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:24.976 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:25.547 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:39:25.547 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:25.547 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:25.547 [2024-12-09 05:30:12.410121] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:39:25.547 [2024-12-09 05:30:12.410214] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:39:25.547 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:25.547 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:39:25.547 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:25.547 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:25.547 [2024-12-09 05:30:12.418039] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:39:25.547 [2024-12-09 05:30:12.418405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:39:25.547 [2024-12-09 05:30:12.418531] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:39:25.547 [2024-12-09 05:30:12.418702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:39:25.547 [2024-12-09 05:30:12.418724] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:39:25.547 [2024-12-09 05:30:12.418740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:39:25.547 [2024-12-09 05:30:12.418749] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:39:25.547 [2024-12-09 05:30:12.418812] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:39:25.547 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:25.547 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:39:25.547 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:25.547 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:25.547 [2024-12-09 05:30:12.463168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:25.547 BaseBdev1 00:39:25.547 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:25.548 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:39:25.548 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:39:25.548 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:25.548 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:39:25.548 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:25.548 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:25.548 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:25.548 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:25.548 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:25.548 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:25.548 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:39:25.548 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:25.548 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:25.548 [ 00:39:25.548 { 00:39:25.548 "name": "BaseBdev1", 00:39:25.548 "aliases": [ 00:39:25.548 "55a03a38-e715-409f-83fd-c105d632c6ff" 00:39:25.548 ], 00:39:25.548 "product_name": "Malloc disk", 00:39:25.548 "block_size": 512, 00:39:25.548 "num_blocks": 65536, 00:39:25.548 "uuid": "55a03a38-e715-409f-83fd-c105d632c6ff", 00:39:25.548 "assigned_rate_limits": { 00:39:25.548 "rw_ios_per_sec": 0, 00:39:25.548 "rw_mbytes_per_sec": 0, 00:39:25.548 "r_mbytes_per_sec": 0, 00:39:25.548 "w_mbytes_per_sec": 0 00:39:25.548 }, 00:39:25.548 "claimed": true, 00:39:25.548 "claim_type": "exclusive_write", 00:39:25.548 "zoned": false, 00:39:25.548 "supported_io_types": { 00:39:25.548 "read": true, 00:39:25.548 "write": true, 00:39:25.548 "unmap": true, 00:39:25.548 "flush": true, 00:39:25.548 "reset": true, 00:39:25.548 "nvme_admin": false, 00:39:25.548 "nvme_io": false, 00:39:25.548 "nvme_io_md": false, 00:39:25.548 "write_zeroes": true, 00:39:25.548 "zcopy": true, 00:39:25.548 "get_zone_info": false, 00:39:25.548 "zone_management": false, 00:39:25.548 "zone_append": false, 00:39:25.548 "compare": false, 00:39:25.548 "compare_and_write": false, 00:39:25.548 "abort": true, 00:39:25.548 "seek_hole": false, 00:39:25.548 "seek_data": false, 00:39:25.548 "copy": true, 00:39:25.548 "nvme_iov_md": false 00:39:25.548 }, 00:39:25.548 "memory_domains": [ 00:39:25.548 { 00:39:25.548 "dma_device_id": "system", 00:39:25.548 "dma_device_type": 1 00:39:25.548 }, 00:39:25.548 { 00:39:25.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:25.548 "dma_device_type": 2 00:39:25.548 } 00:39:25.548 ], 00:39:25.548 "driver_specific": {} 00:39:25.548 } 00:39:25.548 ] 00:39:25.548 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:39:25.548 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:39:25.548 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:25.548 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:25.548 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:25.548 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:25.548 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:25.548 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:25.548 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:25.548 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:25.548 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:25.548 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:25.548 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:25.548 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:25.548 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:25.548 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:25.548 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:25.806 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:25.806 "name": "Existed_Raid", 
00:39:25.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:25.807 "strip_size_kb": 0, 00:39:25.807 "state": "configuring", 00:39:25.807 "raid_level": "raid1", 00:39:25.807 "superblock": false, 00:39:25.807 "num_base_bdevs": 4, 00:39:25.807 "num_base_bdevs_discovered": 1, 00:39:25.807 "num_base_bdevs_operational": 4, 00:39:25.807 "base_bdevs_list": [ 00:39:25.807 { 00:39:25.807 "name": "BaseBdev1", 00:39:25.807 "uuid": "55a03a38-e715-409f-83fd-c105d632c6ff", 00:39:25.807 "is_configured": true, 00:39:25.807 "data_offset": 0, 00:39:25.807 "data_size": 65536 00:39:25.807 }, 00:39:25.807 { 00:39:25.807 "name": "BaseBdev2", 00:39:25.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:25.807 "is_configured": false, 00:39:25.807 "data_offset": 0, 00:39:25.807 "data_size": 0 00:39:25.807 }, 00:39:25.807 { 00:39:25.807 "name": "BaseBdev3", 00:39:25.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:25.807 "is_configured": false, 00:39:25.807 "data_offset": 0, 00:39:25.807 "data_size": 0 00:39:25.807 }, 00:39:25.807 { 00:39:25.807 "name": "BaseBdev4", 00:39:25.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:25.807 "is_configured": false, 00:39:25.807 "data_offset": 0, 00:39:25.807 "data_size": 0 00:39:25.807 } 00:39:25.807 ] 00:39:25.807 }' 00:39:25.807 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:25.807 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:26.065 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:39:26.065 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.065 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:26.065 [2024-12-09 05:30:13.027521] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:39:26.065 [2024-12-09 05:30:13.027632] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:39:26.065 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.065 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:39:26.065 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.065 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:26.065 [2024-12-09 05:30:13.035467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:26.323 [2024-12-09 05:30:13.038272] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:39:26.323 [2024-12-09 05:30:13.038494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:39:26.323 [2024-12-09 05:30:13.038631] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:39:26.323 [2024-12-09 05:30:13.038664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:39:26.323 [2024-12-09 05:30:13.038675] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:39:26.323 [2024-12-09 05:30:13.038694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:39:26.323 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.323 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:39:26.323 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:39:26.323 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:26.324 
05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:26.324 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:26.324 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:26.324 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:26.324 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:26.324 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:26.324 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:26.324 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:26.324 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:26.324 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:26.324 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:26.324 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.324 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:26.324 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.324 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:26.324 "name": "Existed_Raid", 00:39:26.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:26.324 "strip_size_kb": 0, 00:39:26.324 "state": "configuring", 00:39:26.324 "raid_level": "raid1", 00:39:26.324 "superblock": false, 00:39:26.324 "num_base_bdevs": 4, 00:39:26.324 "num_base_bdevs_discovered": 1, 
00:39:26.324 "num_base_bdevs_operational": 4, 00:39:26.324 "base_bdevs_list": [ 00:39:26.324 { 00:39:26.324 "name": "BaseBdev1", 00:39:26.324 "uuid": "55a03a38-e715-409f-83fd-c105d632c6ff", 00:39:26.324 "is_configured": true, 00:39:26.324 "data_offset": 0, 00:39:26.324 "data_size": 65536 00:39:26.324 }, 00:39:26.324 { 00:39:26.324 "name": "BaseBdev2", 00:39:26.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:26.324 "is_configured": false, 00:39:26.324 "data_offset": 0, 00:39:26.324 "data_size": 0 00:39:26.324 }, 00:39:26.324 { 00:39:26.324 "name": "BaseBdev3", 00:39:26.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:26.324 "is_configured": false, 00:39:26.324 "data_offset": 0, 00:39:26.324 "data_size": 0 00:39:26.324 }, 00:39:26.324 { 00:39:26.324 "name": "BaseBdev4", 00:39:26.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:26.324 "is_configured": false, 00:39:26.324 "data_offset": 0, 00:39:26.324 "data_size": 0 00:39:26.324 } 00:39:26.324 ] 00:39:26.324 }' 00:39:26.324 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:26.324 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:26.891 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:39:26.891 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.891 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:26.891 [2024-12-09 05:30:13.621313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:26.891 BaseBdev2 00:39:26.891 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.891 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:39:26.891 05:30:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:39:26.891 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:26.891 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:39:26.891 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:26.891 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:26.891 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:26.891 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.891 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:26.891 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.891 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:39:26.891 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.891 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:26.891 [ 00:39:26.891 { 00:39:26.891 "name": "BaseBdev2", 00:39:26.891 "aliases": [ 00:39:26.891 "d5def0b0-309e-471b-9bab-ca45135a87c4" 00:39:26.891 ], 00:39:26.891 "product_name": "Malloc disk", 00:39:26.891 "block_size": 512, 00:39:26.891 "num_blocks": 65536, 00:39:26.891 "uuid": "d5def0b0-309e-471b-9bab-ca45135a87c4", 00:39:26.892 "assigned_rate_limits": { 00:39:26.892 "rw_ios_per_sec": 0, 00:39:26.892 "rw_mbytes_per_sec": 0, 00:39:26.892 "r_mbytes_per_sec": 0, 00:39:26.892 "w_mbytes_per_sec": 0 00:39:26.892 }, 00:39:26.892 "claimed": true, 00:39:26.892 "claim_type": "exclusive_write", 00:39:26.892 "zoned": false, 00:39:26.892 "supported_io_types": { 00:39:26.892 "read": true, 
00:39:26.892 "write": true, 00:39:26.892 "unmap": true, 00:39:26.892 "flush": true, 00:39:26.892 "reset": true, 00:39:26.892 "nvme_admin": false, 00:39:26.892 "nvme_io": false, 00:39:26.892 "nvme_io_md": false, 00:39:26.892 "write_zeroes": true, 00:39:26.892 "zcopy": true, 00:39:26.892 "get_zone_info": false, 00:39:26.892 "zone_management": false, 00:39:26.892 "zone_append": false, 00:39:26.892 "compare": false, 00:39:26.892 "compare_and_write": false, 00:39:26.892 "abort": true, 00:39:26.892 "seek_hole": false, 00:39:26.892 "seek_data": false, 00:39:26.892 "copy": true, 00:39:26.892 "nvme_iov_md": false 00:39:26.892 }, 00:39:26.892 "memory_domains": [ 00:39:26.892 { 00:39:26.892 "dma_device_id": "system", 00:39:26.892 "dma_device_type": 1 00:39:26.892 }, 00:39:26.892 { 00:39:26.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:26.892 "dma_device_type": 2 00:39:26.892 } 00:39:26.892 ], 00:39:26.892 "driver_specific": {} 00:39:26.892 } 00:39:26.892 ] 00:39:26.892 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.892 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:39:26.892 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:39:26.892 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:39:26.892 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:26.892 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:26.892 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:26.892 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:26.892 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:39:26.892 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:26.892 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:26.892 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:26.892 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:26.892 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:26.892 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:26.892 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:26.892 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.892 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:26.892 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.892 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:26.892 "name": "Existed_Raid", 00:39:26.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:26.892 "strip_size_kb": 0, 00:39:26.892 "state": "configuring", 00:39:26.892 "raid_level": "raid1", 00:39:26.892 "superblock": false, 00:39:26.892 "num_base_bdevs": 4, 00:39:26.892 "num_base_bdevs_discovered": 2, 00:39:26.892 "num_base_bdevs_operational": 4, 00:39:26.892 "base_bdevs_list": [ 00:39:26.892 { 00:39:26.892 "name": "BaseBdev1", 00:39:26.892 "uuid": "55a03a38-e715-409f-83fd-c105d632c6ff", 00:39:26.892 "is_configured": true, 00:39:26.892 "data_offset": 0, 00:39:26.892 "data_size": 65536 00:39:26.892 }, 00:39:26.892 { 00:39:26.892 "name": "BaseBdev2", 00:39:26.892 "uuid": "d5def0b0-309e-471b-9bab-ca45135a87c4", 00:39:26.892 "is_configured": true, 
00:39:26.892 "data_offset": 0, 00:39:26.892 "data_size": 65536 00:39:26.892 }, 00:39:26.892 { 00:39:26.892 "name": "BaseBdev3", 00:39:26.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:26.892 "is_configured": false, 00:39:26.892 "data_offset": 0, 00:39:26.892 "data_size": 0 00:39:26.892 }, 00:39:26.892 { 00:39:26.892 "name": "BaseBdev4", 00:39:26.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:26.892 "is_configured": false, 00:39:26.892 "data_offset": 0, 00:39:26.892 "data_size": 0 00:39:26.892 } 00:39:26.892 ] 00:39:26.892 }' 00:39:26.892 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:26.892 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:27.460 [2024-12-09 05:30:14.245908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:39:27.460 BaseBdev3 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:27.460 [ 00:39:27.460 { 00:39:27.460 "name": "BaseBdev3", 00:39:27.460 "aliases": [ 00:39:27.460 "a4125ba7-06c2-4a4d-8606-b70d4b677297" 00:39:27.460 ], 00:39:27.460 "product_name": "Malloc disk", 00:39:27.460 "block_size": 512, 00:39:27.460 "num_blocks": 65536, 00:39:27.460 "uuid": "a4125ba7-06c2-4a4d-8606-b70d4b677297", 00:39:27.460 "assigned_rate_limits": { 00:39:27.460 "rw_ios_per_sec": 0, 00:39:27.460 "rw_mbytes_per_sec": 0, 00:39:27.460 "r_mbytes_per_sec": 0, 00:39:27.460 "w_mbytes_per_sec": 0 00:39:27.460 }, 00:39:27.460 "claimed": true, 00:39:27.460 "claim_type": "exclusive_write", 00:39:27.460 "zoned": false, 00:39:27.460 "supported_io_types": { 00:39:27.460 "read": true, 00:39:27.460 "write": true, 00:39:27.460 "unmap": true, 00:39:27.460 "flush": true, 00:39:27.460 "reset": true, 00:39:27.460 "nvme_admin": false, 00:39:27.460 "nvme_io": false, 00:39:27.460 "nvme_io_md": false, 00:39:27.460 "write_zeroes": true, 00:39:27.460 "zcopy": true, 00:39:27.460 "get_zone_info": false, 00:39:27.460 "zone_management": false, 00:39:27.460 "zone_append": false, 00:39:27.460 "compare": false, 00:39:27.460 "compare_and_write": false, 
00:39:27.460 "abort": true, 00:39:27.460 "seek_hole": false, 00:39:27.460 "seek_data": false, 00:39:27.460 "copy": true, 00:39:27.460 "nvme_iov_md": false 00:39:27.460 }, 00:39:27.460 "memory_domains": [ 00:39:27.460 { 00:39:27.460 "dma_device_id": "system", 00:39:27.460 "dma_device_type": 1 00:39:27.460 }, 00:39:27.460 { 00:39:27.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:27.460 "dma_device_type": 2 00:39:27.460 } 00:39:27.460 ], 00:39:27.460 "driver_specific": {} 00:39:27.460 } 00:39:27.460 ] 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:27.460 "name": "Existed_Raid", 00:39:27.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:27.460 "strip_size_kb": 0, 00:39:27.460 "state": "configuring", 00:39:27.460 "raid_level": "raid1", 00:39:27.460 "superblock": false, 00:39:27.460 "num_base_bdevs": 4, 00:39:27.460 "num_base_bdevs_discovered": 3, 00:39:27.460 "num_base_bdevs_operational": 4, 00:39:27.460 "base_bdevs_list": [ 00:39:27.460 { 00:39:27.460 "name": "BaseBdev1", 00:39:27.460 "uuid": "55a03a38-e715-409f-83fd-c105d632c6ff", 00:39:27.460 "is_configured": true, 00:39:27.460 "data_offset": 0, 00:39:27.460 "data_size": 65536 00:39:27.460 }, 00:39:27.460 { 00:39:27.460 "name": "BaseBdev2", 00:39:27.460 "uuid": "d5def0b0-309e-471b-9bab-ca45135a87c4", 00:39:27.460 "is_configured": true, 00:39:27.460 "data_offset": 0, 00:39:27.460 "data_size": 65536 00:39:27.460 }, 00:39:27.460 { 00:39:27.460 "name": "BaseBdev3", 00:39:27.460 "uuid": "a4125ba7-06c2-4a4d-8606-b70d4b677297", 00:39:27.460 "is_configured": true, 00:39:27.460 "data_offset": 0, 00:39:27.460 "data_size": 65536 00:39:27.460 }, 00:39:27.460 { 00:39:27.460 "name": "BaseBdev4", 00:39:27.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:27.460 "is_configured": false, 
00:39:27.460 "data_offset": 0, 00:39:27.460 "data_size": 0 00:39:27.460 } 00:39:27.460 ] 00:39:27.460 }' 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:27.460 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:28.027 [2024-12-09 05:30:14.881073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:39:28.027 [2024-12-09 05:30:14.881155] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:39:28.027 [2024-12-09 05:30:14.881173] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:39:28.027 [2024-12-09 05:30:14.881585] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:39:28.027 [2024-12-09 05:30:14.881865] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:39:28.027 [2024-12-09 05:30:14.881906] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:39:28.027 [2024-12-09 05:30:14.882304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:28.027 BaseBdev4 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:28.027 [ 00:39:28.027 { 00:39:28.027 "name": "BaseBdev4", 00:39:28.027 "aliases": [ 00:39:28.027 "01ec8168-1dcb-4e72-aa4f-14cdcf7d53c9" 00:39:28.027 ], 00:39:28.027 "product_name": "Malloc disk", 00:39:28.027 "block_size": 512, 00:39:28.027 "num_blocks": 65536, 00:39:28.027 "uuid": "01ec8168-1dcb-4e72-aa4f-14cdcf7d53c9", 00:39:28.027 "assigned_rate_limits": { 00:39:28.027 "rw_ios_per_sec": 0, 00:39:28.027 "rw_mbytes_per_sec": 0, 00:39:28.027 "r_mbytes_per_sec": 0, 00:39:28.027 "w_mbytes_per_sec": 0 00:39:28.027 }, 00:39:28.027 "claimed": true, 00:39:28.027 "claim_type": "exclusive_write", 00:39:28.027 "zoned": false, 00:39:28.027 "supported_io_types": { 00:39:28.027 "read": true, 00:39:28.027 "write": true, 00:39:28.027 "unmap": true, 00:39:28.027 "flush": true, 00:39:28.027 "reset": true, 00:39:28.027 
"nvme_admin": false, 00:39:28.027 "nvme_io": false, 00:39:28.027 "nvme_io_md": false, 00:39:28.027 "write_zeroes": true, 00:39:28.027 "zcopy": true, 00:39:28.027 "get_zone_info": false, 00:39:28.027 "zone_management": false, 00:39:28.027 "zone_append": false, 00:39:28.027 "compare": false, 00:39:28.027 "compare_and_write": false, 00:39:28.027 "abort": true, 00:39:28.027 "seek_hole": false, 00:39:28.027 "seek_data": false, 00:39:28.027 "copy": true, 00:39:28.027 "nvme_iov_md": false 00:39:28.027 }, 00:39:28.027 "memory_domains": [ 00:39:28.027 { 00:39:28.027 "dma_device_id": "system", 00:39:28.027 "dma_device_type": 1 00:39:28.027 }, 00:39:28.027 { 00:39:28.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:28.027 "dma_device_type": 2 00:39:28.027 } 00:39:28.027 ], 00:39:28.027 "driver_specific": {} 00:39:28.027 } 00:39:28.027 ] 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:28.027 05:30:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.027 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:28.027 "name": "Existed_Raid", 00:39:28.027 "uuid": "f752c039-2925-4740-bf2d-4ab0f2be51fb", 00:39:28.027 "strip_size_kb": 0, 00:39:28.027 "state": "online", 00:39:28.027 "raid_level": "raid1", 00:39:28.027 "superblock": false, 00:39:28.027 "num_base_bdevs": 4, 00:39:28.027 "num_base_bdevs_discovered": 4, 00:39:28.027 "num_base_bdevs_operational": 4, 00:39:28.027 "base_bdevs_list": [ 00:39:28.027 { 00:39:28.027 "name": "BaseBdev1", 00:39:28.027 "uuid": "55a03a38-e715-409f-83fd-c105d632c6ff", 00:39:28.027 "is_configured": true, 00:39:28.027 "data_offset": 0, 00:39:28.027 "data_size": 65536 00:39:28.028 }, 00:39:28.028 { 00:39:28.028 "name": "BaseBdev2", 00:39:28.028 "uuid": "d5def0b0-309e-471b-9bab-ca45135a87c4", 00:39:28.028 "is_configured": true, 00:39:28.028 "data_offset": 0, 00:39:28.028 "data_size": 65536 00:39:28.028 }, 00:39:28.028 { 00:39:28.028 "name": "BaseBdev3", 00:39:28.028 "uuid": 
"a4125ba7-06c2-4a4d-8606-b70d4b677297", 00:39:28.028 "is_configured": true, 00:39:28.028 "data_offset": 0, 00:39:28.028 "data_size": 65536 00:39:28.028 }, 00:39:28.028 { 00:39:28.028 "name": "BaseBdev4", 00:39:28.028 "uuid": "01ec8168-1dcb-4e72-aa4f-14cdcf7d53c9", 00:39:28.028 "is_configured": true, 00:39:28.028 "data_offset": 0, 00:39:28.028 "data_size": 65536 00:39:28.028 } 00:39:28.028 ] 00:39:28.028 }' 00:39:28.028 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:28.028 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:28.606 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:39:28.606 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:39:28.606 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:39:28.606 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:39:28.606 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:39:28.606 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:39:28.606 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:39:28.606 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.606 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:39:28.606 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:28.606 [2024-12-09 05:30:15.465788] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:28.606 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.606 05:30:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:28.606 "name": "Existed_Raid", 00:39:28.606 "aliases": [ 00:39:28.606 "f752c039-2925-4740-bf2d-4ab0f2be51fb" 00:39:28.606 ], 00:39:28.606 "product_name": "Raid Volume", 00:39:28.606 "block_size": 512, 00:39:28.606 "num_blocks": 65536, 00:39:28.606 "uuid": "f752c039-2925-4740-bf2d-4ab0f2be51fb", 00:39:28.606 "assigned_rate_limits": { 00:39:28.606 "rw_ios_per_sec": 0, 00:39:28.606 "rw_mbytes_per_sec": 0, 00:39:28.606 "r_mbytes_per_sec": 0, 00:39:28.606 "w_mbytes_per_sec": 0 00:39:28.606 }, 00:39:28.606 "claimed": false, 00:39:28.606 "zoned": false, 00:39:28.606 "supported_io_types": { 00:39:28.606 "read": true, 00:39:28.606 "write": true, 00:39:28.606 "unmap": false, 00:39:28.606 "flush": false, 00:39:28.606 "reset": true, 00:39:28.606 "nvme_admin": false, 00:39:28.606 "nvme_io": false, 00:39:28.606 "nvme_io_md": false, 00:39:28.606 "write_zeroes": true, 00:39:28.606 "zcopy": false, 00:39:28.606 "get_zone_info": false, 00:39:28.606 "zone_management": false, 00:39:28.606 "zone_append": false, 00:39:28.606 "compare": false, 00:39:28.606 "compare_and_write": false, 00:39:28.606 "abort": false, 00:39:28.606 "seek_hole": false, 00:39:28.606 "seek_data": false, 00:39:28.606 "copy": false, 00:39:28.606 "nvme_iov_md": false 00:39:28.606 }, 00:39:28.606 "memory_domains": [ 00:39:28.606 { 00:39:28.606 "dma_device_id": "system", 00:39:28.606 "dma_device_type": 1 00:39:28.606 }, 00:39:28.606 { 00:39:28.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:28.606 "dma_device_type": 2 00:39:28.606 }, 00:39:28.606 { 00:39:28.606 "dma_device_id": "system", 00:39:28.606 "dma_device_type": 1 00:39:28.606 }, 00:39:28.606 { 00:39:28.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:28.606 "dma_device_type": 2 00:39:28.606 }, 00:39:28.606 { 00:39:28.606 "dma_device_id": "system", 00:39:28.606 "dma_device_type": 1 00:39:28.606 }, 00:39:28.606 { 00:39:28.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:39:28.606 "dma_device_type": 2 00:39:28.606 }, 00:39:28.606 { 00:39:28.606 "dma_device_id": "system", 00:39:28.606 "dma_device_type": 1 00:39:28.606 }, 00:39:28.606 { 00:39:28.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:28.606 "dma_device_type": 2 00:39:28.606 } 00:39:28.606 ], 00:39:28.606 "driver_specific": { 00:39:28.606 "raid": { 00:39:28.606 "uuid": "f752c039-2925-4740-bf2d-4ab0f2be51fb", 00:39:28.606 "strip_size_kb": 0, 00:39:28.606 "state": "online", 00:39:28.606 "raid_level": "raid1", 00:39:28.606 "superblock": false, 00:39:28.606 "num_base_bdevs": 4, 00:39:28.606 "num_base_bdevs_discovered": 4, 00:39:28.606 "num_base_bdevs_operational": 4, 00:39:28.606 "base_bdevs_list": [ 00:39:28.606 { 00:39:28.606 "name": "BaseBdev1", 00:39:28.606 "uuid": "55a03a38-e715-409f-83fd-c105d632c6ff", 00:39:28.606 "is_configured": true, 00:39:28.606 "data_offset": 0, 00:39:28.606 "data_size": 65536 00:39:28.606 }, 00:39:28.606 { 00:39:28.606 "name": "BaseBdev2", 00:39:28.606 "uuid": "d5def0b0-309e-471b-9bab-ca45135a87c4", 00:39:28.606 "is_configured": true, 00:39:28.606 "data_offset": 0, 00:39:28.606 "data_size": 65536 00:39:28.606 }, 00:39:28.606 { 00:39:28.606 "name": "BaseBdev3", 00:39:28.606 "uuid": "a4125ba7-06c2-4a4d-8606-b70d4b677297", 00:39:28.606 "is_configured": true, 00:39:28.606 "data_offset": 0, 00:39:28.606 "data_size": 65536 00:39:28.606 }, 00:39:28.606 { 00:39:28.606 "name": "BaseBdev4", 00:39:28.606 "uuid": "01ec8168-1dcb-4e72-aa4f-14cdcf7d53c9", 00:39:28.606 "is_configured": true, 00:39:28.606 "data_offset": 0, 00:39:28.606 "data_size": 65536 00:39:28.606 } 00:39:28.606 ] 00:39:28.606 } 00:39:28.606 } 00:39:28.606 }' 00:39:28.606 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:39:28.606 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:39:28.606 BaseBdev2 00:39:28.606 BaseBdev3 
00:39:28.606 BaseBdev4' 00:39:28.606 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:28.864 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:39:28.864 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:28.864 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:39:28.864 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.864 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:28.864 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:28.864 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.864 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:28.864 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:28.864 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:28.864 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:39:28.864 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:28.864 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.864 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:28.864 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.864 05:30:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:28.864 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:28.864 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:28.864 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:39:28.864 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.864 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:28.864 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:28.864 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.864 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:28.864 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:28.864 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:28.864 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:39:28.864 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:28.864 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.864 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:28.864 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.121 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:29.121 05:30:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:29.121 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:39:29.121 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.121 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:29.121 [2024-12-09 05:30:15.857541] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:39:29.121 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.121 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:39:29.121 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:39:29.121 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:39:29.121 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:39:29.121 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:39:29.121 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:39:29.121 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:29.121 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:29.121 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:29.121 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:29.121 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:29.121 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:29.121 
05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:29.121 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:29.121 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:29.121 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:29.121 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:29.121 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.121 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:29.121 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.121 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:29.121 "name": "Existed_Raid", 00:39:29.121 "uuid": "f752c039-2925-4740-bf2d-4ab0f2be51fb", 00:39:29.121 "strip_size_kb": 0, 00:39:29.121 "state": "online", 00:39:29.121 "raid_level": "raid1", 00:39:29.121 "superblock": false, 00:39:29.121 "num_base_bdevs": 4, 00:39:29.121 "num_base_bdevs_discovered": 3, 00:39:29.121 "num_base_bdevs_operational": 3, 00:39:29.122 "base_bdevs_list": [ 00:39:29.122 { 00:39:29.122 "name": null, 00:39:29.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:29.122 "is_configured": false, 00:39:29.122 "data_offset": 0, 00:39:29.122 "data_size": 65536 00:39:29.122 }, 00:39:29.122 { 00:39:29.122 "name": "BaseBdev2", 00:39:29.122 "uuid": "d5def0b0-309e-471b-9bab-ca45135a87c4", 00:39:29.122 "is_configured": true, 00:39:29.122 "data_offset": 0, 00:39:29.122 "data_size": 65536 00:39:29.122 }, 00:39:29.122 { 00:39:29.122 "name": "BaseBdev3", 00:39:29.122 "uuid": "a4125ba7-06c2-4a4d-8606-b70d4b677297", 00:39:29.122 "is_configured": true, 00:39:29.122 "data_offset": 0, 
00:39:29.122 "data_size": 65536 00:39:29.122 }, 00:39:29.122 { 00:39:29.122 "name": "BaseBdev4", 00:39:29.122 "uuid": "01ec8168-1dcb-4e72-aa4f-14cdcf7d53c9", 00:39:29.122 "is_configured": true, 00:39:29.122 "data_offset": 0, 00:39:29.122 "data_size": 65536 00:39:29.122 } 00:39:29.122 ] 00:39:29.122 }' 00:39:29.122 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:29.122 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:29.687 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:39:29.687 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:39:29.687 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:29.687 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:39:29.687 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.687 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:29.687 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.687 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:39:29.687 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:39:29.687 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:39:29.687 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.687 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:29.687 [2024-12-09 05:30:16.534666] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:39:29.687 05:30:16 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.687 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:39:29.687 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:39:29.687 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:39:29.687 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:29.687 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.687 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:29.687 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.945 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:39:29.945 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:39:29.945 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:39:29.945 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.945 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:29.945 [2024-12-09 05:30:16.689998] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:39:29.945 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.945 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:39:29.945 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:39:29.945 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:29.945 05:30:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:39:29.945 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.945 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:29.945 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.945 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:39:29.945 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:39:29.945 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:39:29.945 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.945 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:29.945 [2024-12-09 05:30:16.847062] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:39:29.945 [2024-12-09 05:30:16.847184] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:30.204 [2024-12-09 05:30:16.938472] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:30.204 [2024-12-09 05:30:16.938542] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:30.204 [2024-12-09 05:30:16.938562] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:39:30.204 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.204 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:39:30.204 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:39:30.204 05:30:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:30.204 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.204 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:39:30.204 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:30.204 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.204 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:39:30.204 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:39:30.204 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:39:30.204 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:39:30.204 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:39:30.204 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:39:30.204 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.204 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:30.204 BaseBdev2 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:30.204 [ 00:39:30.204 { 00:39:30.204 "name": "BaseBdev2", 00:39:30.204 "aliases": [ 00:39:30.204 "0b6e644c-bd59-4a28-b26b-ab0a8374661f" 00:39:30.204 ], 00:39:30.204 "product_name": "Malloc disk", 00:39:30.204 "block_size": 512, 00:39:30.204 "num_blocks": 65536, 00:39:30.204 "uuid": "0b6e644c-bd59-4a28-b26b-ab0a8374661f", 00:39:30.204 "assigned_rate_limits": { 00:39:30.204 "rw_ios_per_sec": 0, 00:39:30.204 "rw_mbytes_per_sec": 0, 00:39:30.204 "r_mbytes_per_sec": 0, 00:39:30.204 "w_mbytes_per_sec": 0 00:39:30.204 }, 00:39:30.204 "claimed": false, 00:39:30.204 "zoned": false, 00:39:30.204 "supported_io_types": { 00:39:30.204 "read": true, 00:39:30.204 "write": true, 00:39:30.204 "unmap": true, 00:39:30.204 "flush": true, 00:39:30.204 "reset": true, 00:39:30.204 "nvme_admin": false, 00:39:30.204 "nvme_io": false, 00:39:30.204 "nvme_io_md": false, 00:39:30.204 "write_zeroes": true, 00:39:30.204 "zcopy": true, 00:39:30.204 "get_zone_info": false, 00:39:30.204 "zone_management": false, 00:39:30.204 "zone_append": false, 
00:39:30.204 "compare": false, 00:39:30.204 "compare_and_write": false, 00:39:30.204 "abort": true, 00:39:30.204 "seek_hole": false, 00:39:30.204 "seek_data": false, 00:39:30.204 "copy": true, 00:39:30.204 "nvme_iov_md": false 00:39:30.204 }, 00:39:30.204 "memory_domains": [ 00:39:30.204 { 00:39:30.204 "dma_device_id": "system", 00:39:30.204 "dma_device_type": 1 00:39:30.204 }, 00:39:30.204 { 00:39:30.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:30.204 "dma_device_type": 2 00:39:30.204 } 00:39:30.204 ], 00:39:30.204 "driver_specific": {} 00:39:30.204 } 00:39:30.204 ] 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:30.204 BaseBdev3 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:30.204 [ 00:39:30.204 { 00:39:30.204 "name": "BaseBdev3", 00:39:30.204 "aliases": [ 00:39:30.204 "3c80e3d6-48a5-4d6e-a1f1-c99deac664c2" 00:39:30.204 ], 00:39:30.204 "product_name": "Malloc disk", 00:39:30.204 "block_size": 512, 00:39:30.204 "num_blocks": 65536, 00:39:30.204 "uuid": "3c80e3d6-48a5-4d6e-a1f1-c99deac664c2", 00:39:30.204 "assigned_rate_limits": { 00:39:30.204 "rw_ios_per_sec": 0, 00:39:30.204 "rw_mbytes_per_sec": 0, 00:39:30.204 "r_mbytes_per_sec": 0, 00:39:30.204 "w_mbytes_per_sec": 0 00:39:30.204 }, 00:39:30.204 "claimed": false, 00:39:30.204 "zoned": false, 00:39:30.204 "supported_io_types": { 00:39:30.204 "read": true, 00:39:30.204 "write": true, 00:39:30.204 "unmap": true, 00:39:30.204 "flush": true, 00:39:30.204 "reset": true, 00:39:30.204 "nvme_admin": false, 00:39:30.204 "nvme_io": false, 00:39:30.204 "nvme_io_md": false, 00:39:30.204 "write_zeroes": true, 00:39:30.204 "zcopy": true, 00:39:30.204 "get_zone_info": false, 00:39:30.204 "zone_management": false, 00:39:30.204 "zone_append": false, 
00:39:30.204 "compare": false, 00:39:30.204 "compare_and_write": false, 00:39:30.204 "abort": true, 00:39:30.204 "seek_hole": false, 00:39:30.204 "seek_data": false, 00:39:30.204 "copy": true, 00:39:30.204 "nvme_iov_md": false 00:39:30.204 }, 00:39:30.204 "memory_domains": [ 00:39:30.204 { 00:39:30.204 "dma_device_id": "system", 00:39:30.204 "dma_device_type": 1 00:39:30.204 }, 00:39:30.204 { 00:39:30.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:30.204 "dma_device_type": 2 00:39:30.204 } 00:39:30.204 ], 00:39:30.204 "driver_specific": {} 00:39:30.204 } 00:39:30.204 ] 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:39:30.204 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:39:30.205 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:39:30.205 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.205 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:30.463 BaseBdev4 00:39:30.463 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.463 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:39:30.463 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:39:30.463 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:30.463 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:39:30.463 05:30:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:30.463 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:30.463 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:30.463 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.463 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:30.463 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.463 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:39:30.463 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.463 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:30.463 [ 00:39:30.463 { 00:39:30.463 "name": "BaseBdev4", 00:39:30.463 "aliases": [ 00:39:30.463 "c7d0132e-e21c-4edd-af08-439069b7adf4" 00:39:30.463 ], 00:39:30.463 "product_name": "Malloc disk", 00:39:30.463 "block_size": 512, 00:39:30.463 "num_blocks": 65536, 00:39:30.464 "uuid": "c7d0132e-e21c-4edd-af08-439069b7adf4", 00:39:30.464 "assigned_rate_limits": { 00:39:30.464 "rw_ios_per_sec": 0, 00:39:30.464 "rw_mbytes_per_sec": 0, 00:39:30.464 "r_mbytes_per_sec": 0, 00:39:30.464 "w_mbytes_per_sec": 0 00:39:30.464 }, 00:39:30.464 "claimed": false, 00:39:30.464 "zoned": false, 00:39:30.464 "supported_io_types": { 00:39:30.464 "read": true, 00:39:30.464 "write": true, 00:39:30.464 "unmap": true, 00:39:30.464 "flush": true, 00:39:30.464 "reset": true, 00:39:30.464 "nvme_admin": false, 00:39:30.464 "nvme_io": false, 00:39:30.464 "nvme_io_md": false, 00:39:30.464 "write_zeroes": true, 00:39:30.464 "zcopy": true, 00:39:30.464 "get_zone_info": false, 00:39:30.464 "zone_management": false, 00:39:30.464 "zone_append": false, 
00:39:30.464 "compare": false, 00:39:30.464 "compare_and_write": false, 00:39:30.464 "abort": true, 00:39:30.464 "seek_hole": false, 00:39:30.464 "seek_data": false, 00:39:30.464 "copy": true, 00:39:30.464 "nvme_iov_md": false 00:39:30.464 }, 00:39:30.464 "memory_domains": [ 00:39:30.464 { 00:39:30.464 "dma_device_id": "system", 00:39:30.464 "dma_device_type": 1 00:39:30.464 }, 00:39:30.464 { 00:39:30.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:30.464 "dma_device_type": 2 00:39:30.464 } 00:39:30.464 ], 00:39:30.464 "driver_specific": {} 00:39:30.464 } 00:39:30.464 ] 00:39:30.464 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.464 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:39:30.464 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:39:30.464 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:39:30.464 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:39:30.464 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.464 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:30.464 [2024-12-09 05:30:17.251957] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:39:30.464 [2024-12-09 05:30:17.252144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:39:30.464 [2024-12-09 05:30:17.252184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:30.464 [2024-12-09 05:30:17.254853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:39:30.464 [2024-12-09 05:30:17.254918] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:39:30.464 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.464 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:30.464 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:30.464 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:30.464 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:30.464 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:30.464 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:30.464 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:30.464 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:30.464 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:30.464 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:30.464 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:30.464 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:30.464 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.464 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:30.464 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.464 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:39:30.464 "name": "Existed_Raid", 00:39:30.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:30.464 "strip_size_kb": 0, 00:39:30.464 "state": "configuring", 00:39:30.464 "raid_level": "raid1", 00:39:30.464 "superblock": false, 00:39:30.464 "num_base_bdevs": 4, 00:39:30.464 "num_base_bdevs_discovered": 3, 00:39:30.464 "num_base_bdevs_operational": 4, 00:39:30.464 "base_bdevs_list": [ 00:39:30.464 { 00:39:30.464 "name": "BaseBdev1", 00:39:30.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:30.464 "is_configured": false, 00:39:30.464 "data_offset": 0, 00:39:30.464 "data_size": 0 00:39:30.464 }, 00:39:30.464 { 00:39:30.464 "name": "BaseBdev2", 00:39:30.464 "uuid": "0b6e644c-bd59-4a28-b26b-ab0a8374661f", 00:39:30.464 "is_configured": true, 00:39:30.464 "data_offset": 0, 00:39:30.464 "data_size": 65536 00:39:30.464 }, 00:39:30.464 { 00:39:30.464 "name": "BaseBdev3", 00:39:30.464 "uuid": "3c80e3d6-48a5-4d6e-a1f1-c99deac664c2", 00:39:30.464 "is_configured": true, 00:39:30.464 "data_offset": 0, 00:39:30.464 "data_size": 65536 00:39:30.464 }, 00:39:30.464 { 00:39:30.464 "name": "BaseBdev4", 00:39:30.464 "uuid": "c7d0132e-e21c-4edd-af08-439069b7adf4", 00:39:30.464 "is_configured": true, 00:39:30.464 "data_offset": 0, 00:39:30.464 "data_size": 65536 00:39:30.464 } 00:39:30.464 ] 00:39:30.464 }' 00:39:30.464 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:30.464 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:31.054 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:39:31.054 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.054 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:31.054 [2024-12-09 05:30:17.788245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:39:31.054 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.054 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:31.054 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:31.054 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:31.054 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:31.054 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:31.054 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:31.054 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:31.054 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:31.054 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:31.054 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:31.054 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:31.054 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:31.054 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.054 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:31.054 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.054 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:31.054 "name": "Existed_Raid", 00:39:31.054 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:39:31.054 "strip_size_kb": 0, 00:39:31.054 "state": "configuring", 00:39:31.054 "raid_level": "raid1", 00:39:31.054 "superblock": false, 00:39:31.054 "num_base_bdevs": 4, 00:39:31.054 "num_base_bdevs_discovered": 2, 00:39:31.054 "num_base_bdevs_operational": 4, 00:39:31.054 "base_bdevs_list": [ 00:39:31.054 { 00:39:31.054 "name": "BaseBdev1", 00:39:31.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:31.054 "is_configured": false, 00:39:31.054 "data_offset": 0, 00:39:31.054 "data_size": 0 00:39:31.054 }, 00:39:31.054 { 00:39:31.054 "name": null, 00:39:31.054 "uuid": "0b6e644c-bd59-4a28-b26b-ab0a8374661f", 00:39:31.054 "is_configured": false, 00:39:31.054 "data_offset": 0, 00:39:31.054 "data_size": 65536 00:39:31.054 }, 00:39:31.054 { 00:39:31.054 "name": "BaseBdev3", 00:39:31.054 "uuid": "3c80e3d6-48a5-4d6e-a1f1-c99deac664c2", 00:39:31.054 "is_configured": true, 00:39:31.054 "data_offset": 0, 00:39:31.054 "data_size": 65536 00:39:31.054 }, 00:39:31.054 { 00:39:31.054 "name": "BaseBdev4", 00:39:31.054 "uuid": "c7d0132e-e21c-4edd-af08-439069b7adf4", 00:39:31.054 "is_configured": true, 00:39:31.054 "data_offset": 0, 00:39:31.054 "data_size": 65536 00:39:31.054 } 00:39:31.054 ] 00:39:31.054 }' 00:39:31.054 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:31.054 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:31.623 BaseBdev1 00:39:31.623 [2024-12-09 05:30:18.442146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:31.623 [ 00:39:31.623 { 00:39:31.623 "name": "BaseBdev1", 00:39:31.623 "aliases": [ 00:39:31.623 "213b1821-1e75-46c9-bda9-95056cdc2221" 00:39:31.623 ], 00:39:31.623 "product_name": "Malloc disk", 00:39:31.623 "block_size": 512, 00:39:31.623 "num_blocks": 65536, 00:39:31.623 "uuid": "213b1821-1e75-46c9-bda9-95056cdc2221", 00:39:31.623 "assigned_rate_limits": { 00:39:31.623 "rw_ios_per_sec": 0, 00:39:31.623 "rw_mbytes_per_sec": 0, 00:39:31.623 "r_mbytes_per_sec": 0, 00:39:31.623 "w_mbytes_per_sec": 0 00:39:31.623 }, 00:39:31.623 "claimed": true, 00:39:31.623 "claim_type": "exclusive_write", 00:39:31.623 "zoned": false, 00:39:31.623 "supported_io_types": { 00:39:31.623 "read": true, 00:39:31.623 "write": true, 00:39:31.623 "unmap": true, 00:39:31.623 "flush": true, 00:39:31.623 "reset": true, 00:39:31.623 "nvme_admin": false, 00:39:31.623 "nvme_io": false, 00:39:31.623 "nvme_io_md": false, 00:39:31.623 "write_zeroes": true, 00:39:31.623 "zcopy": true, 00:39:31.623 "get_zone_info": false, 00:39:31.623 "zone_management": false, 00:39:31.623 "zone_append": false, 00:39:31.623 "compare": false, 00:39:31.623 "compare_and_write": false, 00:39:31.623 "abort": true, 00:39:31.623 "seek_hole": false, 00:39:31.623 "seek_data": false, 00:39:31.623 "copy": true, 00:39:31.623 "nvme_iov_md": false 00:39:31.623 }, 00:39:31.623 "memory_domains": [ 00:39:31.623 { 00:39:31.623 "dma_device_id": "system", 00:39:31.623 "dma_device_type": 1 00:39:31.623 }, 00:39:31.623 { 00:39:31.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:31.623 "dma_device_type": 2 00:39:31.623 } 00:39:31.623 ], 00:39:31.623 "driver_specific": {} 00:39:31.623 } 00:39:31.623 ] 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:31.623 "name": "Existed_Raid", 00:39:31.623 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:39:31.623 "strip_size_kb": 0, 00:39:31.623 "state": "configuring", 00:39:31.623 "raid_level": "raid1", 00:39:31.623 "superblock": false, 00:39:31.623 "num_base_bdevs": 4, 00:39:31.623 "num_base_bdevs_discovered": 3, 00:39:31.623 "num_base_bdevs_operational": 4, 00:39:31.623 "base_bdevs_list": [ 00:39:31.623 { 00:39:31.623 "name": "BaseBdev1", 00:39:31.623 "uuid": "213b1821-1e75-46c9-bda9-95056cdc2221", 00:39:31.623 "is_configured": true, 00:39:31.623 "data_offset": 0, 00:39:31.623 "data_size": 65536 00:39:31.623 }, 00:39:31.623 { 00:39:31.623 "name": null, 00:39:31.623 "uuid": "0b6e644c-bd59-4a28-b26b-ab0a8374661f", 00:39:31.623 "is_configured": false, 00:39:31.623 "data_offset": 0, 00:39:31.623 "data_size": 65536 00:39:31.623 }, 00:39:31.623 { 00:39:31.623 "name": "BaseBdev3", 00:39:31.623 "uuid": "3c80e3d6-48a5-4d6e-a1f1-c99deac664c2", 00:39:31.623 "is_configured": true, 00:39:31.623 "data_offset": 0, 00:39:31.623 "data_size": 65536 00:39:31.623 }, 00:39:31.623 { 00:39:31.623 "name": "BaseBdev4", 00:39:31.623 "uuid": "c7d0132e-e21c-4edd-af08-439069b7adf4", 00:39:31.623 "is_configured": true, 00:39:31.623 "data_offset": 0, 00:39:31.623 "data_size": 65536 00:39:31.623 } 00:39:31.623 ] 00:39:31.623 }' 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:31.623 05:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:32.201 05:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:32.201 05:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:39:32.201 05:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:32.201 05:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:32.201 05:30:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:32.201 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:39:32.201 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:39:32.201 05:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:32.201 05:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:32.201 [2024-12-09 05:30:19.058535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:39:32.201 05:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:32.201 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:32.201 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:32.201 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:32.201 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:32.201 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:32.201 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:32.201 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:32.201 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:32.201 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:32.201 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:32.202 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:39:32.202 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:32.202 05:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:32.202 05:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:32.202 05:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:32.202 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:32.202 "name": "Existed_Raid", 00:39:32.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:32.202 "strip_size_kb": 0, 00:39:32.202 "state": "configuring", 00:39:32.202 "raid_level": "raid1", 00:39:32.202 "superblock": false, 00:39:32.202 "num_base_bdevs": 4, 00:39:32.202 "num_base_bdevs_discovered": 2, 00:39:32.202 "num_base_bdevs_operational": 4, 00:39:32.202 "base_bdevs_list": [ 00:39:32.202 { 00:39:32.202 "name": "BaseBdev1", 00:39:32.202 "uuid": "213b1821-1e75-46c9-bda9-95056cdc2221", 00:39:32.202 "is_configured": true, 00:39:32.202 "data_offset": 0, 00:39:32.202 "data_size": 65536 00:39:32.202 }, 00:39:32.202 { 00:39:32.202 "name": null, 00:39:32.202 "uuid": "0b6e644c-bd59-4a28-b26b-ab0a8374661f", 00:39:32.202 "is_configured": false, 00:39:32.202 "data_offset": 0, 00:39:32.202 "data_size": 65536 00:39:32.202 }, 00:39:32.202 { 00:39:32.202 "name": null, 00:39:32.202 "uuid": "3c80e3d6-48a5-4d6e-a1f1-c99deac664c2", 00:39:32.202 "is_configured": false, 00:39:32.202 "data_offset": 0, 00:39:32.202 "data_size": 65536 00:39:32.202 }, 00:39:32.202 { 00:39:32.202 "name": "BaseBdev4", 00:39:32.202 "uuid": "c7d0132e-e21c-4edd-af08-439069b7adf4", 00:39:32.202 "is_configured": true, 00:39:32.202 "data_offset": 0, 00:39:32.202 "data_size": 65536 00:39:32.202 } 00:39:32.202 ] 00:39:32.202 }' 00:39:32.202 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:32.202 05:30:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:32.772 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:39:32.772 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:32.772 05:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:32.772 05:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:32.772 05:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:32.772 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:39:32.772 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:39:32.772 05:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:32.772 05:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:32.772 [2024-12-09 05:30:19.662651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:39:32.772 05:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:32.772 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:32.772 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:32.772 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:32.772 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:32.772 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:32.772 05:30:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:32.772 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:32.772 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:32.772 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:32.772 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:32.772 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:32.772 05:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:32.772 05:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:32.772 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:32.772 05:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:32.772 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:32.772 "name": "Existed_Raid", 00:39:32.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:32.772 "strip_size_kb": 0, 00:39:32.772 "state": "configuring", 00:39:32.772 "raid_level": "raid1", 00:39:32.772 "superblock": false, 00:39:32.772 "num_base_bdevs": 4, 00:39:32.772 "num_base_bdevs_discovered": 3, 00:39:32.772 "num_base_bdevs_operational": 4, 00:39:32.772 "base_bdevs_list": [ 00:39:32.772 { 00:39:32.772 "name": "BaseBdev1", 00:39:32.772 "uuid": "213b1821-1e75-46c9-bda9-95056cdc2221", 00:39:32.772 "is_configured": true, 00:39:32.772 "data_offset": 0, 00:39:32.772 "data_size": 65536 00:39:32.772 }, 00:39:32.772 { 00:39:32.772 "name": null, 00:39:32.772 "uuid": "0b6e644c-bd59-4a28-b26b-ab0a8374661f", 00:39:32.772 "is_configured": false, 00:39:32.772 "data_offset": 
0, 00:39:32.772 "data_size": 65536 00:39:32.772 }, 00:39:32.772 { 00:39:32.772 "name": "BaseBdev3", 00:39:32.772 "uuid": "3c80e3d6-48a5-4d6e-a1f1-c99deac664c2", 00:39:32.772 "is_configured": true, 00:39:32.772 "data_offset": 0, 00:39:32.772 "data_size": 65536 00:39:32.772 }, 00:39:32.772 { 00:39:32.772 "name": "BaseBdev4", 00:39:32.772 "uuid": "c7d0132e-e21c-4edd-af08-439069b7adf4", 00:39:32.772 "is_configured": true, 00:39:32.772 "data_offset": 0, 00:39:32.772 "data_size": 65536 00:39:32.772 } 00:39:32.772 ] 00:39:32.772 }' 00:39:32.772 05:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:32.772 05:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:33.340 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:33.340 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:39:33.340 05:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.340 05:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:33.340 05:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.340 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:39:33.340 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:39:33.340 05:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.340 05:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:33.340 [2024-12-09 05:30:20.259078] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:39:33.599 05:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.599 05:30:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:33.600 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:33.600 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:33.600 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:33.600 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:33.600 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:33.600 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:33.600 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:33.600 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:33.600 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:33.600 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:33.600 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:33.600 05:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.600 05:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:33.600 05:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.600 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:33.600 "name": "Existed_Raid", 00:39:33.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:33.600 "strip_size_kb": 0, 00:39:33.600 "state": "configuring", 00:39:33.600 
"raid_level": "raid1", 00:39:33.600 "superblock": false, 00:39:33.600 "num_base_bdevs": 4, 00:39:33.600 "num_base_bdevs_discovered": 2, 00:39:33.600 "num_base_bdevs_operational": 4, 00:39:33.600 "base_bdevs_list": [ 00:39:33.600 { 00:39:33.600 "name": null, 00:39:33.600 "uuid": "213b1821-1e75-46c9-bda9-95056cdc2221", 00:39:33.600 "is_configured": false, 00:39:33.600 "data_offset": 0, 00:39:33.600 "data_size": 65536 00:39:33.600 }, 00:39:33.600 { 00:39:33.600 "name": null, 00:39:33.600 "uuid": "0b6e644c-bd59-4a28-b26b-ab0a8374661f", 00:39:33.600 "is_configured": false, 00:39:33.600 "data_offset": 0, 00:39:33.600 "data_size": 65536 00:39:33.600 }, 00:39:33.600 { 00:39:33.600 "name": "BaseBdev3", 00:39:33.600 "uuid": "3c80e3d6-48a5-4d6e-a1f1-c99deac664c2", 00:39:33.600 "is_configured": true, 00:39:33.600 "data_offset": 0, 00:39:33.600 "data_size": 65536 00:39:33.600 }, 00:39:33.600 { 00:39:33.600 "name": "BaseBdev4", 00:39:33.600 "uuid": "c7d0132e-e21c-4edd-af08-439069b7adf4", 00:39:33.600 "is_configured": true, 00:39:33.600 "data_offset": 0, 00:39:33.600 "data_size": 65536 00:39:33.600 } 00:39:33.600 ] 00:39:33.600 }' 00:39:33.600 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:33.600 05:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:34.168 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:39:34.168 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:34.168 05:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.168 05:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:34.168 05:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.168 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:39:34.168 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:39:34.168 05:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.168 05:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:34.168 [2024-12-09 05:30:20.935665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:34.168 05:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.168 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:34.168 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:34.168 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:34.168 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:34.168 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:34.168 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:34.168 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:34.168 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:34.168 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:34.168 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:34.168 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:34.168 05:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:39:34.168 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:34.168 05:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:34.168 05:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.168 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:34.168 "name": "Existed_Raid", 00:39:34.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:34.168 "strip_size_kb": 0, 00:39:34.168 "state": "configuring", 00:39:34.168 "raid_level": "raid1", 00:39:34.168 "superblock": false, 00:39:34.168 "num_base_bdevs": 4, 00:39:34.168 "num_base_bdevs_discovered": 3, 00:39:34.168 "num_base_bdevs_operational": 4, 00:39:34.168 "base_bdevs_list": [ 00:39:34.168 { 00:39:34.168 "name": null, 00:39:34.168 "uuid": "213b1821-1e75-46c9-bda9-95056cdc2221", 00:39:34.168 "is_configured": false, 00:39:34.168 "data_offset": 0, 00:39:34.168 "data_size": 65536 00:39:34.168 }, 00:39:34.168 { 00:39:34.168 "name": "BaseBdev2", 00:39:34.168 "uuid": "0b6e644c-bd59-4a28-b26b-ab0a8374661f", 00:39:34.168 "is_configured": true, 00:39:34.168 "data_offset": 0, 00:39:34.168 "data_size": 65536 00:39:34.168 }, 00:39:34.168 { 00:39:34.168 "name": "BaseBdev3", 00:39:34.168 "uuid": "3c80e3d6-48a5-4d6e-a1f1-c99deac664c2", 00:39:34.168 "is_configured": true, 00:39:34.168 "data_offset": 0, 00:39:34.168 "data_size": 65536 00:39:34.168 }, 00:39:34.168 { 00:39:34.168 "name": "BaseBdev4", 00:39:34.168 "uuid": "c7d0132e-e21c-4edd-af08-439069b7adf4", 00:39:34.168 "is_configured": true, 00:39:34.168 "data_offset": 0, 00:39:34.168 "data_size": 65536 00:39:34.168 } 00:39:34.168 ] 00:39:34.168 }' 00:39:34.168 05:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:34.168 05:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:34.735 05:30:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:34.735 05:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 213b1821-1e75-46c9-bda9-95056cdc2221 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:34.736 [2024-12-09 05:30:21.628438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:39:34.736 NewBaseBdev 00:39:34.736 [2024-12-09 05:30:21.628698] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:39:34.736 [2024-12-09 05:30:21.628728] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, 
blocklen 512 00:39:34.736 [2024-12-09 05:30:21.629108] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:39:34.736 [2024-12-09 05:30:21.629376] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:39:34.736 [2024-12-09 05:30:21.629406] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:39:34.736 [2024-12-09 05:30:21.629748] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:34.736 [ 00:39:34.736 { 00:39:34.736 "name": "NewBaseBdev", 00:39:34.736 "aliases": [ 00:39:34.736 "213b1821-1e75-46c9-bda9-95056cdc2221" 00:39:34.736 ], 00:39:34.736 "product_name": "Malloc disk", 00:39:34.736 "block_size": 512, 00:39:34.736 "num_blocks": 65536, 00:39:34.736 "uuid": "213b1821-1e75-46c9-bda9-95056cdc2221", 00:39:34.736 "assigned_rate_limits": { 00:39:34.736 "rw_ios_per_sec": 0, 00:39:34.736 "rw_mbytes_per_sec": 0, 00:39:34.736 "r_mbytes_per_sec": 0, 00:39:34.736 "w_mbytes_per_sec": 0 00:39:34.736 }, 00:39:34.736 "claimed": true, 00:39:34.736 "claim_type": "exclusive_write", 00:39:34.736 "zoned": false, 00:39:34.736 "supported_io_types": { 00:39:34.736 "read": true, 00:39:34.736 "write": true, 00:39:34.736 "unmap": true, 00:39:34.736 "flush": true, 00:39:34.736 "reset": true, 00:39:34.736 "nvme_admin": false, 00:39:34.736 "nvme_io": false, 00:39:34.736 "nvme_io_md": false, 00:39:34.736 "write_zeroes": true, 00:39:34.736 "zcopy": true, 00:39:34.736 "get_zone_info": false, 00:39:34.736 "zone_management": false, 00:39:34.736 "zone_append": false, 00:39:34.736 "compare": false, 00:39:34.736 "compare_and_write": false, 00:39:34.736 "abort": true, 00:39:34.736 "seek_hole": false, 00:39:34.736 "seek_data": false, 00:39:34.736 "copy": true, 00:39:34.736 "nvme_iov_md": false 00:39:34.736 }, 00:39:34.736 "memory_domains": [ 00:39:34.736 { 00:39:34.736 "dma_device_id": "system", 00:39:34.736 "dma_device_type": 1 00:39:34.736 }, 00:39:34.736 { 00:39:34.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:34.736 "dma_device_type": 2 00:39:34.736 } 00:39:34.736 ], 00:39:34.736 "driver_specific": {} 00:39:34.736 } 00:39:34.736 ] 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:34.736 05:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.995 05:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:34.995 "name": "Existed_Raid", 00:39:34.995 "uuid": "08e5e332-17c4-4444-b14e-1229f42eca09", 00:39:34.995 "strip_size_kb": 0, 00:39:34.995 "state": "online", 00:39:34.995 
"raid_level": "raid1", 00:39:34.995 "superblock": false, 00:39:34.995 "num_base_bdevs": 4, 00:39:34.995 "num_base_bdevs_discovered": 4, 00:39:34.995 "num_base_bdevs_operational": 4, 00:39:34.995 "base_bdevs_list": [ 00:39:34.995 { 00:39:34.995 "name": "NewBaseBdev", 00:39:34.995 "uuid": "213b1821-1e75-46c9-bda9-95056cdc2221", 00:39:34.995 "is_configured": true, 00:39:34.995 "data_offset": 0, 00:39:34.995 "data_size": 65536 00:39:34.995 }, 00:39:34.995 { 00:39:34.995 "name": "BaseBdev2", 00:39:34.995 "uuid": "0b6e644c-bd59-4a28-b26b-ab0a8374661f", 00:39:34.995 "is_configured": true, 00:39:34.995 "data_offset": 0, 00:39:34.995 "data_size": 65536 00:39:34.995 }, 00:39:34.995 { 00:39:34.995 "name": "BaseBdev3", 00:39:34.995 "uuid": "3c80e3d6-48a5-4d6e-a1f1-c99deac664c2", 00:39:34.995 "is_configured": true, 00:39:34.995 "data_offset": 0, 00:39:34.995 "data_size": 65536 00:39:34.995 }, 00:39:34.995 { 00:39:34.995 "name": "BaseBdev4", 00:39:34.995 "uuid": "c7d0132e-e21c-4edd-af08-439069b7adf4", 00:39:34.995 "is_configured": true, 00:39:34.995 "data_offset": 0, 00:39:34.995 "data_size": 65536 00:39:34.995 } 00:39:34.995 ] 00:39:34.995 }' 00:39:34.995 05:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:34.995 05:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:35.254 05:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:39:35.254 05:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:39:35.254 05:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:39:35.254 05:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:39:35.254 05:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:39:35.254 05:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:39:35.254 05:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:39:35.254 05:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:39:35.254 05:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:35.254 05:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:35.254 [2024-12-09 05:30:22.221201] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:35.512 05:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:35.512 05:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:35.512 "name": "Existed_Raid", 00:39:35.512 "aliases": [ 00:39:35.512 "08e5e332-17c4-4444-b14e-1229f42eca09" 00:39:35.512 ], 00:39:35.512 "product_name": "Raid Volume", 00:39:35.512 "block_size": 512, 00:39:35.512 "num_blocks": 65536, 00:39:35.512 "uuid": "08e5e332-17c4-4444-b14e-1229f42eca09", 00:39:35.512 "assigned_rate_limits": { 00:39:35.512 "rw_ios_per_sec": 0, 00:39:35.512 "rw_mbytes_per_sec": 0, 00:39:35.512 "r_mbytes_per_sec": 0, 00:39:35.512 "w_mbytes_per_sec": 0 00:39:35.512 }, 00:39:35.512 "claimed": false, 00:39:35.512 "zoned": false, 00:39:35.512 "supported_io_types": { 00:39:35.512 "read": true, 00:39:35.512 "write": true, 00:39:35.512 "unmap": false, 00:39:35.512 "flush": false, 00:39:35.512 "reset": true, 00:39:35.512 "nvme_admin": false, 00:39:35.512 "nvme_io": false, 00:39:35.512 "nvme_io_md": false, 00:39:35.512 "write_zeroes": true, 00:39:35.512 "zcopy": false, 00:39:35.512 "get_zone_info": false, 00:39:35.512 "zone_management": false, 00:39:35.512 "zone_append": false, 00:39:35.512 "compare": false, 00:39:35.512 "compare_and_write": false, 00:39:35.512 "abort": false, 00:39:35.513 "seek_hole": false, 00:39:35.513 "seek_data": false, 00:39:35.513 
"copy": false, 00:39:35.513 "nvme_iov_md": false 00:39:35.513 }, 00:39:35.513 "memory_domains": [ 00:39:35.513 { 00:39:35.513 "dma_device_id": "system", 00:39:35.513 "dma_device_type": 1 00:39:35.513 }, 00:39:35.513 { 00:39:35.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:35.513 "dma_device_type": 2 00:39:35.513 }, 00:39:35.513 { 00:39:35.513 "dma_device_id": "system", 00:39:35.513 "dma_device_type": 1 00:39:35.513 }, 00:39:35.513 { 00:39:35.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:35.513 "dma_device_type": 2 00:39:35.513 }, 00:39:35.513 { 00:39:35.513 "dma_device_id": "system", 00:39:35.513 "dma_device_type": 1 00:39:35.513 }, 00:39:35.513 { 00:39:35.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:35.513 "dma_device_type": 2 00:39:35.513 }, 00:39:35.513 { 00:39:35.513 "dma_device_id": "system", 00:39:35.513 "dma_device_type": 1 00:39:35.513 }, 00:39:35.513 { 00:39:35.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:35.513 "dma_device_type": 2 00:39:35.513 } 00:39:35.513 ], 00:39:35.513 "driver_specific": { 00:39:35.513 "raid": { 00:39:35.513 "uuid": "08e5e332-17c4-4444-b14e-1229f42eca09", 00:39:35.513 "strip_size_kb": 0, 00:39:35.513 "state": "online", 00:39:35.513 "raid_level": "raid1", 00:39:35.513 "superblock": false, 00:39:35.513 "num_base_bdevs": 4, 00:39:35.513 "num_base_bdevs_discovered": 4, 00:39:35.513 "num_base_bdevs_operational": 4, 00:39:35.513 "base_bdevs_list": [ 00:39:35.513 { 00:39:35.513 "name": "NewBaseBdev", 00:39:35.513 "uuid": "213b1821-1e75-46c9-bda9-95056cdc2221", 00:39:35.513 "is_configured": true, 00:39:35.513 "data_offset": 0, 00:39:35.513 "data_size": 65536 00:39:35.513 }, 00:39:35.513 { 00:39:35.513 "name": "BaseBdev2", 00:39:35.513 "uuid": "0b6e644c-bd59-4a28-b26b-ab0a8374661f", 00:39:35.513 "is_configured": true, 00:39:35.513 "data_offset": 0, 00:39:35.513 "data_size": 65536 00:39:35.513 }, 00:39:35.513 { 00:39:35.513 "name": "BaseBdev3", 00:39:35.513 "uuid": "3c80e3d6-48a5-4d6e-a1f1-c99deac664c2", 00:39:35.513 
"is_configured": true, 00:39:35.513 "data_offset": 0, 00:39:35.513 "data_size": 65536 00:39:35.513 }, 00:39:35.513 { 00:39:35.513 "name": "BaseBdev4", 00:39:35.513 "uuid": "c7d0132e-e21c-4edd-af08-439069b7adf4", 00:39:35.513 "is_configured": true, 00:39:35.513 "data_offset": 0, 00:39:35.513 "data_size": 65536 00:39:35.513 } 00:39:35.513 ] 00:39:35.513 } 00:39:35.513 } 00:39:35.513 }' 00:39:35.513 05:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:39:35.513 05:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:39:35.513 BaseBdev2 00:39:35.513 BaseBdev3 00:39:35.513 BaseBdev4' 00:39:35.513 05:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:35.513 05:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:39:35.513 05:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:35.513 05:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:39:35.513 05:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:35.513 05:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:35.513 05:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:35.513 05:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:35.513 05:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:35.513 05:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:35.513 05:30:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:35.513 05:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:35.513 05:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:39:35.513 05:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:35.513 05:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:35.513 05:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:35.771 05:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:35.771 05:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:35.771 05:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:35.771 05:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:39:35.771 05:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:35.771 05:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:35.771 05:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:35.771 05:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:35.771 05:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:35.771 05:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:35.771 05:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:35.771 05:30:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:39:35.771 05:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:35.771 05:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:35.771 05:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:35.771 05:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:35.771 05:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:35.771 05:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:35.771 05:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:39:35.771 05:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:35.771 05:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:35.771 [2024-12-09 05:30:22.617031] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:39:35.771 [2024-12-09 05:30:22.617097] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:35.771 [2024-12-09 05:30:22.617239] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:35.771 [2024-12-09 05:30:22.617647] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:35.771 [2024-12-09 05:30:22.617671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:39:35.771 05:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:35.771 05:30:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73402 00:39:35.771 05:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73402 ']' 00:39:35.771 05:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73402 00:39:35.771 05:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:39:35.771 05:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:35.771 05:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73402 00:39:35.771 killing process with pid 73402 00:39:35.771 05:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:35.771 05:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:35.771 05:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73402' 00:39:35.771 05:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73402 00:39:35.771 [2024-12-09 05:30:22.658132] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:35.771 05:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73402 00:39:36.337 [2024-12-09 05:30:23.056725] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:39:37.711 00:39:37.711 real 0m13.716s 00:39:37.711 user 0m22.427s 00:39:37.711 sys 0m2.019s 00:39:37.711 ************************************ 00:39:37.711 END TEST raid_state_function_test 00:39:37.711 ************************************ 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:39:37.711 05:30:24 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:39:37.711 05:30:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:39:37.711 05:30:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:37.711 05:30:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:37.711 ************************************ 00:39:37.711 START TEST raid_state_function_test_sb 00:39:37.711 ************************************ 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:39:37.711 
05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:39:37.711 Process raid pid: 74090 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74090 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74090' 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74090 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74090 ']' 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:37.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:37.711 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:37.711 [2024-12-09 05:30:24.574877] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:39:37.711 [2024-12-09 05:30:24.575322] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:37.969 [2024-12-09 05:30:24.775145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:38.227 [2024-12-09 05:30:24.952454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:38.485 [2024-12-09 05:30:25.203749] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:38.485 [2024-12-09 05:30:25.204108] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:38.742 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:38.742 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:39:38.742 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:39:38.742 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.742 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:38.742 [2024-12-09 05:30:25.644450] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:39:38.742 [2024-12-09 05:30:25.646056] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:39:38.742 [2024-12-09 05:30:25.646088] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:39:38.742 [2024-12-09 05:30:25.646107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:39:38.742 [2024-12-09 05:30:25.646118] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:39:38.742 [2024-12-09 05:30:25.646132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:39:38.742 [2024-12-09 05:30:25.646151] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:39:38.742 [2024-12-09 05:30:25.646185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:39:38.742 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.742 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:38.742 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:38.742 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:38.742 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:38.742 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:38.742 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:38.742 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:38.742 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:38.742 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:38.742 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:38.742 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:38.742 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:38.742 05:30:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.742 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:38.742 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.742 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:38.742 "name": "Existed_Raid", 00:39:38.742 "uuid": "5a6a62f4-e82d-4171-b534-795563ea95e6", 00:39:38.742 "strip_size_kb": 0, 00:39:38.742 "state": "configuring", 00:39:38.742 "raid_level": "raid1", 00:39:38.742 "superblock": true, 00:39:38.742 "num_base_bdevs": 4, 00:39:38.742 "num_base_bdevs_discovered": 0, 00:39:38.742 "num_base_bdevs_operational": 4, 00:39:38.743 "base_bdevs_list": [ 00:39:38.743 { 00:39:38.743 "name": "BaseBdev1", 00:39:38.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:38.743 "is_configured": false, 00:39:38.743 "data_offset": 0, 00:39:38.743 "data_size": 0 00:39:38.743 }, 00:39:38.743 { 00:39:38.743 "name": "BaseBdev2", 00:39:38.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:38.743 "is_configured": false, 00:39:38.743 "data_offset": 0, 00:39:38.743 "data_size": 0 00:39:38.743 }, 00:39:38.743 { 00:39:38.743 "name": "BaseBdev3", 00:39:38.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:38.743 "is_configured": false, 00:39:38.743 "data_offset": 0, 00:39:38.743 "data_size": 0 00:39:38.743 }, 00:39:38.743 { 00:39:38.743 "name": "BaseBdev4", 00:39:38.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:38.743 "is_configured": false, 00:39:38.743 "data_offset": 0, 00:39:38.743 "data_size": 0 00:39:38.743 } 00:39:38.743 ] 00:39:38.743 }' 00:39:38.743 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:38.743 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:39.372 05:30:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:39:39.372 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.372 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:39.372 [2024-12-09 05:30:26.204602] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:39:39.372 [2024-12-09 05:30:26.204892] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:39:39.372 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.372 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:39:39.372 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.372 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:39.372 [2024-12-09 05:30:26.212581] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:39:39.372 [2024-12-09 05:30:26.212809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:39:39.372 [2024-12-09 05:30:26.212961] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:39:39.372 [2024-12-09 05:30:26.213113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:39:39.372 [2024-12-09 05:30:26.213138] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:39:39.372 [2024-12-09 05:30:26.213156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:39:39.372 [2024-12-09 05:30:26.213167] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:39:39.372 [2024-12-09 05:30:26.213182] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:39.373 [2024-12-09 05:30:26.258824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:39.373 BaseBdev1 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:39.373 [ 00:39:39.373 { 00:39:39.373 "name": "BaseBdev1", 00:39:39.373 "aliases": [ 00:39:39.373 "bd460cd1-0b4f-4f53-8973-8aa0430804ce" 00:39:39.373 ], 00:39:39.373 "product_name": "Malloc disk", 00:39:39.373 "block_size": 512, 00:39:39.373 "num_blocks": 65536, 00:39:39.373 "uuid": "bd460cd1-0b4f-4f53-8973-8aa0430804ce", 00:39:39.373 "assigned_rate_limits": { 00:39:39.373 "rw_ios_per_sec": 0, 00:39:39.373 "rw_mbytes_per_sec": 0, 00:39:39.373 "r_mbytes_per_sec": 0, 00:39:39.373 "w_mbytes_per_sec": 0 00:39:39.373 }, 00:39:39.373 "claimed": true, 00:39:39.373 "claim_type": "exclusive_write", 00:39:39.373 "zoned": false, 00:39:39.373 "supported_io_types": { 00:39:39.373 "read": true, 00:39:39.373 "write": true, 00:39:39.373 "unmap": true, 00:39:39.373 "flush": true, 00:39:39.373 "reset": true, 00:39:39.373 "nvme_admin": false, 00:39:39.373 "nvme_io": false, 00:39:39.373 "nvme_io_md": false, 00:39:39.373 "write_zeroes": true, 00:39:39.373 "zcopy": true, 00:39:39.373 "get_zone_info": false, 00:39:39.373 "zone_management": false, 00:39:39.373 "zone_append": false, 00:39:39.373 "compare": false, 00:39:39.373 "compare_and_write": false, 00:39:39.373 "abort": true, 00:39:39.373 "seek_hole": false, 00:39:39.373 "seek_data": false, 00:39:39.373 "copy": true, 00:39:39.373 "nvme_iov_md": false 00:39:39.373 }, 00:39:39.373 "memory_domains": [ 00:39:39.373 { 00:39:39.373 "dma_device_id": "system", 00:39:39.373 "dma_device_type": 1 00:39:39.373 }, 00:39:39.373 { 00:39:39.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:39.373 "dma_device_type": 2 00:39:39.373 } 00:39:39.373 ], 00:39:39.373 "driver_specific": {} 
00:39:39.373 } 00:39:39.373 ] 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:39.373 05:30:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.631 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:39.631 "name": "Existed_Raid", 00:39:39.631 "uuid": "1e413346-dc16-4bf6-8d55-88245fb25515", 00:39:39.631 "strip_size_kb": 0, 00:39:39.631 "state": "configuring", 00:39:39.631 "raid_level": "raid1", 00:39:39.631 "superblock": true, 00:39:39.631 "num_base_bdevs": 4, 00:39:39.631 "num_base_bdevs_discovered": 1, 00:39:39.631 "num_base_bdevs_operational": 4, 00:39:39.631 "base_bdevs_list": [ 00:39:39.631 { 00:39:39.631 "name": "BaseBdev1", 00:39:39.631 "uuid": "bd460cd1-0b4f-4f53-8973-8aa0430804ce", 00:39:39.631 "is_configured": true, 00:39:39.631 "data_offset": 2048, 00:39:39.631 "data_size": 63488 00:39:39.631 }, 00:39:39.631 { 00:39:39.631 "name": "BaseBdev2", 00:39:39.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:39.631 "is_configured": false, 00:39:39.631 "data_offset": 0, 00:39:39.631 "data_size": 0 00:39:39.631 }, 00:39:39.631 { 00:39:39.631 "name": "BaseBdev3", 00:39:39.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:39.631 "is_configured": false, 00:39:39.631 "data_offset": 0, 00:39:39.631 "data_size": 0 00:39:39.631 }, 00:39:39.631 { 00:39:39.631 "name": "BaseBdev4", 00:39:39.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:39.631 "is_configured": false, 00:39:39.631 "data_offset": 0, 00:39:39.631 "data_size": 0 00:39:39.631 } 00:39:39.631 ] 00:39:39.631 }' 00:39:39.631 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:39.631 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:39.889 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:39:39.889 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.889 05:30:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:39:39.889 [2024-12-09 05:30:26.827070] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:39:39.889 [2024-12-09 05:30:26.827159] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:39:39.889 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.889 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:39:39.889 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.889 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:39.889 [2024-12-09 05:30:26.835111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:39.889 [2024-12-09 05:30:26.837674] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:39:39.889 [2024-12-09 05:30:26.837880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:39:39.889 [2024-12-09 05:30:26.838032] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:39:39.889 [2024-12-09 05:30:26.838181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:39:39.889 [2024-12-09 05:30:26.838204] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:39:39.889 [2024-12-09 05:30:26.838221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:39:39.889 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.889 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:39:39.889 05:30:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:39:39.889 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:39.889 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:39.889 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:39.889 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:39.889 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:39.889 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:39.889 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:39.889 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:39.889 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:39.889 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:39.889 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:39.889 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:39.889 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.889 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:40.147 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.147 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:40.147 "name": 
"Existed_Raid", 00:39:40.147 "uuid": "e8fc065e-fc9d-4ed6-9206-cf8a8dadefb6", 00:39:40.147 "strip_size_kb": 0, 00:39:40.147 "state": "configuring", 00:39:40.147 "raid_level": "raid1", 00:39:40.147 "superblock": true, 00:39:40.147 "num_base_bdevs": 4, 00:39:40.147 "num_base_bdevs_discovered": 1, 00:39:40.147 "num_base_bdevs_operational": 4, 00:39:40.147 "base_bdevs_list": [ 00:39:40.147 { 00:39:40.147 "name": "BaseBdev1", 00:39:40.147 "uuid": "bd460cd1-0b4f-4f53-8973-8aa0430804ce", 00:39:40.147 "is_configured": true, 00:39:40.147 "data_offset": 2048, 00:39:40.147 "data_size": 63488 00:39:40.147 }, 00:39:40.147 { 00:39:40.147 "name": "BaseBdev2", 00:39:40.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:40.147 "is_configured": false, 00:39:40.147 "data_offset": 0, 00:39:40.147 "data_size": 0 00:39:40.147 }, 00:39:40.147 { 00:39:40.147 "name": "BaseBdev3", 00:39:40.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:40.147 "is_configured": false, 00:39:40.147 "data_offset": 0, 00:39:40.147 "data_size": 0 00:39:40.147 }, 00:39:40.147 { 00:39:40.147 "name": "BaseBdev4", 00:39:40.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:40.147 "is_configured": false, 00:39:40.147 "data_offset": 0, 00:39:40.147 "data_size": 0 00:39:40.147 } 00:39:40.147 ] 00:39:40.147 }' 00:39:40.147 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:40.147 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:40.406 05:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:39:40.406 05:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.406 05:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:40.665 BaseBdev2 00:39:40.665 [2024-12-09 05:30:27.404011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:40.665 [ 00:39:40.665 { 00:39:40.665 "name": "BaseBdev2", 00:39:40.665 "aliases": [ 00:39:40.665 "6e22e416-6e07-41a7-ad00-f4f94d68cd6a" 00:39:40.665 ], 00:39:40.665 "product_name": "Malloc disk", 00:39:40.665 "block_size": 512, 00:39:40.665 "num_blocks": 65536, 00:39:40.665 "uuid": "6e22e416-6e07-41a7-ad00-f4f94d68cd6a", 00:39:40.665 "assigned_rate_limits": { 
00:39:40.665 "rw_ios_per_sec": 0, 00:39:40.665 "rw_mbytes_per_sec": 0, 00:39:40.665 "r_mbytes_per_sec": 0, 00:39:40.665 "w_mbytes_per_sec": 0 00:39:40.665 }, 00:39:40.665 "claimed": true, 00:39:40.665 "claim_type": "exclusive_write", 00:39:40.665 "zoned": false, 00:39:40.665 "supported_io_types": { 00:39:40.665 "read": true, 00:39:40.665 "write": true, 00:39:40.665 "unmap": true, 00:39:40.665 "flush": true, 00:39:40.665 "reset": true, 00:39:40.665 "nvme_admin": false, 00:39:40.665 "nvme_io": false, 00:39:40.665 "nvme_io_md": false, 00:39:40.665 "write_zeroes": true, 00:39:40.665 "zcopy": true, 00:39:40.665 "get_zone_info": false, 00:39:40.665 "zone_management": false, 00:39:40.665 "zone_append": false, 00:39:40.665 "compare": false, 00:39:40.665 "compare_and_write": false, 00:39:40.665 "abort": true, 00:39:40.665 "seek_hole": false, 00:39:40.665 "seek_data": false, 00:39:40.665 "copy": true, 00:39:40.665 "nvme_iov_md": false 00:39:40.665 }, 00:39:40.665 "memory_domains": [ 00:39:40.665 { 00:39:40.665 "dma_device_id": "system", 00:39:40.665 "dma_device_type": 1 00:39:40.665 }, 00:39:40.665 { 00:39:40.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:40.665 "dma_device_type": 2 00:39:40.665 } 00:39:40.665 ], 00:39:40.665 "driver_specific": {} 00:39:40.665 } 00:39:40.665 ] 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:40.665 "name": "Existed_Raid", 00:39:40.665 "uuid": "e8fc065e-fc9d-4ed6-9206-cf8a8dadefb6", 00:39:40.665 "strip_size_kb": 0, 00:39:40.665 "state": "configuring", 00:39:40.665 "raid_level": "raid1", 00:39:40.665 "superblock": true, 00:39:40.665 "num_base_bdevs": 4, 00:39:40.665 "num_base_bdevs_discovered": 2, 00:39:40.665 "num_base_bdevs_operational": 4, 00:39:40.665 
"base_bdevs_list": [ 00:39:40.665 { 00:39:40.665 "name": "BaseBdev1", 00:39:40.665 "uuid": "bd460cd1-0b4f-4f53-8973-8aa0430804ce", 00:39:40.665 "is_configured": true, 00:39:40.665 "data_offset": 2048, 00:39:40.665 "data_size": 63488 00:39:40.665 }, 00:39:40.665 { 00:39:40.665 "name": "BaseBdev2", 00:39:40.665 "uuid": "6e22e416-6e07-41a7-ad00-f4f94d68cd6a", 00:39:40.665 "is_configured": true, 00:39:40.665 "data_offset": 2048, 00:39:40.665 "data_size": 63488 00:39:40.665 }, 00:39:40.665 { 00:39:40.665 "name": "BaseBdev3", 00:39:40.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:40.665 "is_configured": false, 00:39:40.665 "data_offset": 0, 00:39:40.665 "data_size": 0 00:39:40.665 }, 00:39:40.665 { 00:39:40.665 "name": "BaseBdev4", 00:39:40.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:40.665 "is_configured": false, 00:39:40.665 "data_offset": 0, 00:39:40.665 "data_size": 0 00:39:40.665 } 00:39:40.665 ] 00:39:40.665 }' 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:40.665 05:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:41.232 05:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:39:41.232 05:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.232 05:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:41.232 [2024-12-09 05:30:28.023403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:39:41.232 BaseBdev3 00:39:41.232 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.232 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:39:41.232 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:39:41.232 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:41.232 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:39:41.232 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:41.232 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:41.232 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:41.232 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.232 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:41.232 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.232 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:39:41.232 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.232 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:41.232 [ 00:39:41.232 { 00:39:41.232 "name": "BaseBdev3", 00:39:41.232 "aliases": [ 00:39:41.232 "6f6ca8a9-b778-401f-b602-04b87716ca07" 00:39:41.232 ], 00:39:41.232 "product_name": "Malloc disk", 00:39:41.232 "block_size": 512, 00:39:41.232 "num_blocks": 65536, 00:39:41.232 "uuid": "6f6ca8a9-b778-401f-b602-04b87716ca07", 00:39:41.232 "assigned_rate_limits": { 00:39:41.232 "rw_ios_per_sec": 0, 00:39:41.232 "rw_mbytes_per_sec": 0, 00:39:41.232 "r_mbytes_per_sec": 0, 00:39:41.232 "w_mbytes_per_sec": 0 00:39:41.232 }, 00:39:41.232 "claimed": true, 00:39:41.232 "claim_type": "exclusive_write", 00:39:41.232 "zoned": false, 00:39:41.232 "supported_io_types": { 00:39:41.233 "read": true, 00:39:41.233 
"write": true, 00:39:41.233 "unmap": true, 00:39:41.233 "flush": true, 00:39:41.233 "reset": true, 00:39:41.233 "nvme_admin": false, 00:39:41.233 "nvme_io": false, 00:39:41.233 "nvme_io_md": false, 00:39:41.233 "write_zeroes": true, 00:39:41.233 "zcopy": true, 00:39:41.233 "get_zone_info": false, 00:39:41.233 "zone_management": false, 00:39:41.233 "zone_append": false, 00:39:41.233 "compare": false, 00:39:41.233 "compare_and_write": false, 00:39:41.233 "abort": true, 00:39:41.233 "seek_hole": false, 00:39:41.233 "seek_data": false, 00:39:41.233 "copy": true, 00:39:41.233 "nvme_iov_md": false 00:39:41.233 }, 00:39:41.233 "memory_domains": [ 00:39:41.233 { 00:39:41.233 "dma_device_id": "system", 00:39:41.233 "dma_device_type": 1 00:39:41.233 }, 00:39:41.233 { 00:39:41.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:41.233 "dma_device_type": 2 00:39:41.233 } 00:39:41.233 ], 00:39:41.233 "driver_specific": {} 00:39:41.233 } 00:39:41.233 ] 00:39:41.233 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.233 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:39:41.233 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:39:41.233 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:39:41.233 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:41.233 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:41.233 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:41.233 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:41.233 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:39:41.233 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:41.233 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:41.233 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:41.233 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:41.233 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:41.233 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:41.233 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:41.233 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.233 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:41.233 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.233 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:41.233 "name": "Existed_Raid", 00:39:41.233 "uuid": "e8fc065e-fc9d-4ed6-9206-cf8a8dadefb6", 00:39:41.233 "strip_size_kb": 0, 00:39:41.233 "state": "configuring", 00:39:41.233 "raid_level": "raid1", 00:39:41.233 "superblock": true, 00:39:41.233 "num_base_bdevs": 4, 00:39:41.233 "num_base_bdevs_discovered": 3, 00:39:41.233 "num_base_bdevs_operational": 4, 00:39:41.233 "base_bdevs_list": [ 00:39:41.233 { 00:39:41.233 "name": "BaseBdev1", 00:39:41.233 "uuid": "bd460cd1-0b4f-4f53-8973-8aa0430804ce", 00:39:41.233 "is_configured": true, 00:39:41.233 "data_offset": 2048, 00:39:41.233 "data_size": 63488 00:39:41.233 }, 00:39:41.233 { 00:39:41.233 "name": "BaseBdev2", 00:39:41.233 "uuid": 
"6e22e416-6e07-41a7-ad00-f4f94d68cd6a", 00:39:41.233 "is_configured": true, 00:39:41.233 "data_offset": 2048, 00:39:41.233 "data_size": 63488 00:39:41.233 }, 00:39:41.233 { 00:39:41.233 "name": "BaseBdev3", 00:39:41.233 "uuid": "6f6ca8a9-b778-401f-b602-04b87716ca07", 00:39:41.233 "is_configured": true, 00:39:41.233 "data_offset": 2048, 00:39:41.233 "data_size": 63488 00:39:41.233 }, 00:39:41.233 { 00:39:41.233 "name": "BaseBdev4", 00:39:41.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:41.233 "is_configured": false, 00:39:41.233 "data_offset": 0, 00:39:41.233 "data_size": 0 00:39:41.233 } 00:39:41.233 ] 00:39:41.233 }' 00:39:41.233 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:41.233 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:41.801 [2024-12-09 05:30:28.623445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:39:41.801 [2024-12-09 05:30:28.623759] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:39:41.801 [2024-12-09 05:30:28.623847] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:39:41.801 BaseBdev4 00:39:41.801 [2024-12-09 05:30:28.624253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:39:41.801 [2024-12-09 05:30:28.624464] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:39:41.801 [2024-12-09 05:30:28.624494] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.801 [2024-12-09 05:30:28.624716] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:41.801 [ 00:39:41.801 { 00:39:41.801 "name": "BaseBdev4", 00:39:41.801 "aliases": [ 00:39:41.801 "cc669e87-4e60-45fa-bebe-c5130a1b9bf6" 00:39:41.801 ], 00:39:41.801 "product_name": "Malloc disk", 00:39:41.801 "block_size": 512, 00:39:41.801 
"num_blocks": 65536, 00:39:41.801 "uuid": "cc669e87-4e60-45fa-bebe-c5130a1b9bf6", 00:39:41.801 "assigned_rate_limits": { 00:39:41.801 "rw_ios_per_sec": 0, 00:39:41.801 "rw_mbytes_per_sec": 0, 00:39:41.801 "r_mbytes_per_sec": 0, 00:39:41.801 "w_mbytes_per_sec": 0 00:39:41.801 }, 00:39:41.801 "claimed": true, 00:39:41.801 "claim_type": "exclusive_write", 00:39:41.801 "zoned": false, 00:39:41.801 "supported_io_types": { 00:39:41.801 "read": true, 00:39:41.801 "write": true, 00:39:41.801 "unmap": true, 00:39:41.801 "flush": true, 00:39:41.801 "reset": true, 00:39:41.801 "nvme_admin": false, 00:39:41.801 "nvme_io": false, 00:39:41.801 "nvme_io_md": false, 00:39:41.801 "write_zeroes": true, 00:39:41.801 "zcopy": true, 00:39:41.801 "get_zone_info": false, 00:39:41.801 "zone_management": false, 00:39:41.801 "zone_append": false, 00:39:41.801 "compare": false, 00:39:41.801 "compare_and_write": false, 00:39:41.801 "abort": true, 00:39:41.801 "seek_hole": false, 00:39:41.801 "seek_data": false, 00:39:41.801 "copy": true, 00:39:41.801 "nvme_iov_md": false 00:39:41.801 }, 00:39:41.801 "memory_domains": [ 00:39:41.801 { 00:39:41.801 "dma_device_id": "system", 00:39:41.801 "dma_device_type": 1 00:39:41.801 }, 00:39:41.801 { 00:39:41.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:41.801 "dma_device_type": 2 00:39:41.801 } 00:39:41.801 ], 00:39:41.801 "driver_specific": {} 00:39:41.801 } 00:39:41.801 ] 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:41.801 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.802 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:41.802 "name": "Existed_Raid", 00:39:41.802 "uuid": "e8fc065e-fc9d-4ed6-9206-cf8a8dadefb6", 00:39:41.802 "strip_size_kb": 0, 00:39:41.802 "state": "online", 00:39:41.802 "raid_level": "raid1", 00:39:41.802 "superblock": true, 00:39:41.802 "num_base_bdevs": 4, 
00:39:41.802 "num_base_bdevs_discovered": 4, 00:39:41.802 "num_base_bdevs_operational": 4, 00:39:41.802 "base_bdevs_list": [ 00:39:41.802 { 00:39:41.802 "name": "BaseBdev1", 00:39:41.802 "uuid": "bd460cd1-0b4f-4f53-8973-8aa0430804ce", 00:39:41.802 "is_configured": true, 00:39:41.802 "data_offset": 2048, 00:39:41.802 "data_size": 63488 00:39:41.802 }, 00:39:41.802 { 00:39:41.802 "name": "BaseBdev2", 00:39:41.802 "uuid": "6e22e416-6e07-41a7-ad00-f4f94d68cd6a", 00:39:41.802 "is_configured": true, 00:39:41.802 "data_offset": 2048, 00:39:41.802 "data_size": 63488 00:39:41.802 }, 00:39:41.802 { 00:39:41.802 "name": "BaseBdev3", 00:39:41.802 "uuid": "6f6ca8a9-b778-401f-b602-04b87716ca07", 00:39:41.802 "is_configured": true, 00:39:41.802 "data_offset": 2048, 00:39:41.802 "data_size": 63488 00:39:41.802 }, 00:39:41.802 { 00:39:41.802 "name": "BaseBdev4", 00:39:41.802 "uuid": "cc669e87-4e60-45fa-bebe-c5130a1b9bf6", 00:39:41.802 "is_configured": true, 00:39:41.802 "data_offset": 2048, 00:39:41.802 "data_size": 63488 00:39:41.802 } 00:39:41.802 ] 00:39:41.802 }' 00:39:41.802 05:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:41.802 05:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:42.369 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:39:42.369 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:39:42.369 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:39:42.369 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:39:42.369 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:39:42.369 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:39:42.369 
05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:39:42.369 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:39:42.369 05:30:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.369 05:30:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:42.369 [2024-12-09 05:30:29.208251] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:42.369 05:30:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.369 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:42.369 "name": "Existed_Raid", 00:39:42.369 "aliases": [ 00:39:42.369 "e8fc065e-fc9d-4ed6-9206-cf8a8dadefb6" 00:39:42.369 ], 00:39:42.369 "product_name": "Raid Volume", 00:39:42.369 "block_size": 512, 00:39:42.369 "num_blocks": 63488, 00:39:42.369 "uuid": "e8fc065e-fc9d-4ed6-9206-cf8a8dadefb6", 00:39:42.369 "assigned_rate_limits": { 00:39:42.369 "rw_ios_per_sec": 0, 00:39:42.369 "rw_mbytes_per_sec": 0, 00:39:42.369 "r_mbytes_per_sec": 0, 00:39:42.369 "w_mbytes_per_sec": 0 00:39:42.369 }, 00:39:42.369 "claimed": false, 00:39:42.369 "zoned": false, 00:39:42.369 "supported_io_types": { 00:39:42.369 "read": true, 00:39:42.369 "write": true, 00:39:42.369 "unmap": false, 00:39:42.369 "flush": false, 00:39:42.369 "reset": true, 00:39:42.369 "nvme_admin": false, 00:39:42.369 "nvme_io": false, 00:39:42.369 "nvme_io_md": false, 00:39:42.369 "write_zeroes": true, 00:39:42.369 "zcopy": false, 00:39:42.369 "get_zone_info": false, 00:39:42.369 "zone_management": false, 00:39:42.369 "zone_append": false, 00:39:42.369 "compare": false, 00:39:42.369 "compare_and_write": false, 00:39:42.369 "abort": false, 00:39:42.369 "seek_hole": false, 00:39:42.369 "seek_data": false, 00:39:42.369 "copy": false, 00:39:42.369 
"nvme_iov_md": false 00:39:42.369 }, 00:39:42.369 "memory_domains": [ 00:39:42.369 { 00:39:42.369 "dma_device_id": "system", 00:39:42.369 "dma_device_type": 1 00:39:42.369 }, 00:39:42.369 { 00:39:42.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:42.369 "dma_device_type": 2 00:39:42.369 }, 00:39:42.369 { 00:39:42.369 "dma_device_id": "system", 00:39:42.369 "dma_device_type": 1 00:39:42.369 }, 00:39:42.369 { 00:39:42.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:42.369 "dma_device_type": 2 00:39:42.369 }, 00:39:42.369 { 00:39:42.369 "dma_device_id": "system", 00:39:42.369 "dma_device_type": 1 00:39:42.369 }, 00:39:42.369 { 00:39:42.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:42.369 "dma_device_type": 2 00:39:42.369 }, 00:39:42.369 { 00:39:42.369 "dma_device_id": "system", 00:39:42.369 "dma_device_type": 1 00:39:42.369 }, 00:39:42.369 { 00:39:42.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:42.369 "dma_device_type": 2 00:39:42.369 } 00:39:42.369 ], 00:39:42.369 "driver_specific": { 00:39:42.369 "raid": { 00:39:42.369 "uuid": "e8fc065e-fc9d-4ed6-9206-cf8a8dadefb6", 00:39:42.369 "strip_size_kb": 0, 00:39:42.369 "state": "online", 00:39:42.369 "raid_level": "raid1", 00:39:42.369 "superblock": true, 00:39:42.369 "num_base_bdevs": 4, 00:39:42.369 "num_base_bdevs_discovered": 4, 00:39:42.369 "num_base_bdevs_operational": 4, 00:39:42.369 "base_bdevs_list": [ 00:39:42.369 { 00:39:42.369 "name": "BaseBdev1", 00:39:42.369 "uuid": "bd460cd1-0b4f-4f53-8973-8aa0430804ce", 00:39:42.369 "is_configured": true, 00:39:42.369 "data_offset": 2048, 00:39:42.369 "data_size": 63488 00:39:42.369 }, 00:39:42.369 { 00:39:42.369 "name": "BaseBdev2", 00:39:42.369 "uuid": "6e22e416-6e07-41a7-ad00-f4f94d68cd6a", 00:39:42.369 "is_configured": true, 00:39:42.369 "data_offset": 2048, 00:39:42.369 "data_size": 63488 00:39:42.369 }, 00:39:42.369 { 00:39:42.369 "name": "BaseBdev3", 00:39:42.369 "uuid": "6f6ca8a9-b778-401f-b602-04b87716ca07", 00:39:42.369 "is_configured": true, 
00:39:42.369 "data_offset": 2048, 00:39:42.369 "data_size": 63488 00:39:42.369 }, 00:39:42.369 { 00:39:42.369 "name": "BaseBdev4", 00:39:42.369 "uuid": "cc669e87-4e60-45fa-bebe-c5130a1b9bf6", 00:39:42.369 "is_configured": true, 00:39:42.369 "data_offset": 2048, 00:39:42.369 "data_size": 63488 00:39:42.369 } 00:39:42.369 ] 00:39:42.369 } 00:39:42.369 } 00:39:42.369 }' 00:39:42.369 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:39:42.369 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:39:42.369 BaseBdev2 00:39:42.369 BaseBdev3 00:39:42.369 BaseBdev4' 00:39:42.369 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:42.628 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:42.629 05:30:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.629 05:30:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:42.629 [2024-12-09 05:30:29.579986] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:39:42.888 05:30:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.888 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:39:42.888 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:39:42.888 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:39:42.888 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:39:42.888 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:39:42.888 05:30:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:39:42.888 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:42.888 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:42.888 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:42.888 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:42.888 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:42.888 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:42.888 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:42.888 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:42.888 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:42.888 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:42.888 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:42.888 05:30:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.888 05:30:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:42.888 05:30:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.888 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:42.888 "name": "Existed_Raid", 00:39:42.888 "uuid": "e8fc065e-fc9d-4ed6-9206-cf8a8dadefb6", 00:39:42.888 "strip_size_kb": 0, 00:39:42.888 
"state": "online", 00:39:42.888 "raid_level": "raid1", 00:39:42.888 "superblock": true, 00:39:42.888 "num_base_bdevs": 4, 00:39:42.888 "num_base_bdevs_discovered": 3, 00:39:42.888 "num_base_bdevs_operational": 3, 00:39:42.888 "base_bdevs_list": [ 00:39:42.888 { 00:39:42.888 "name": null, 00:39:42.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:42.888 "is_configured": false, 00:39:42.888 "data_offset": 0, 00:39:42.888 "data_size": 63488 00:39:42.888 }, 00:39:42.888 { 00:39:42.888 "name": "BaseBdev2", 00:39:42.888 "uuid": "6e22e416-6e07-41a7-ad00-f4f94d68cd6a", 00:39:42.888 "is_configured": true, 00:39:42.888 "data_offset": 2048, 00:39:42.888 "data_size": 63488 00:39:42.888 }, 00:39:42.888 { 00:39:42.888 "name": "BaseBdev3", 00:39:42.888 "uuid": "6f6ca8a9-b778-401f-b602-04b87716ca07", 00:39:42.888 "is_configured": true, 00:39:42.888 "data_offset": 2048, 00:39:42.888 "data_size": 63488 00:39:42.888 }, 00:39:42.888 { 00:39:42.888 "name": "BaseBdev4", 00:39:42.888 "uuid": "cc669e87-4e60-45fa-bebe-c5130a1b9bf6", 00:39:42.888 "is_configured": true, 00:39:42.888 "data_offset": 2048, 00:39:42.888 "data_size": 63488 00:39:42.888 } 00:39:42.888 ] 00:39:42.888 }' 00:39:42.888 05:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:42.888 05:30:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:43.456 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:39:43.456 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:39:43.456 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:43.456 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.456 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:39:43.456 05:30:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:43.456 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.456 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:39:43.456 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:39:43.456 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:39:43.456 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.456 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:43.456 [2024-12-09 05:30:30.250221] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:39:43.456 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.456 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:39:43.456 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:39:43.456 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:43.456 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.456 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:43.456 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:39:43.456 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.456 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:39:43.456 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:39:43.456 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:39:43.456 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.456 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:43.456 [2024-12-09 05:30:30.402342] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:39:43.715 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.715 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:39:43.715 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:39:43.715 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:43.715 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.715 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:43.715 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:39:43.715 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.715 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:39:43.715 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:39:43.715 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:39:43.715 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.715 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:43.715 [2024-12-09 05:30:30.557421] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:39:43.715 [2024-12-09 05:30:30.557722] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:43.715 [2024-12-09 05:30:30.647390] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:43.715 [2024-12-09 05:30:30.647703] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:43.716 [2024-12-09 05:30:30.647739] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:39:43.716 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.716 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:39:43.716 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:39:43.716 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:43.716 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.716 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:43.716 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:39:43.716 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.975 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:39:43.975 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:39:43.975 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:39:43.975 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:39:43.975 05:30:30 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:39:43.975 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:39:43.975 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.975 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:43.975 BaseBdev2 00:39:43.975 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.975 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:39:43.975 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:39:43.975 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:43.975 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:39:43.975 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:43.975 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:43.975 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:43.975 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.975 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:43.975 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.975 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:39:43.975 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.975 05:30:30 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:39:43.975 [ 00:39:43.975 { 00:39:43.975 "name": "BaseBdev2", 00:39:43.975 "aliases": [ 00:39:43.975 "34edf619-26d6-4080-812f-2c99021b93c6" 00:39:43.975 ], 00:39:43.975 "product_name": "Malloc disk", 00:39:43.975 "block_size": 512, 00:39:43.975 "num_blocks": 65536, 00:39:43.975 "uuid": "34edf619-26d6-4080-812f-2c99021b93c6", 00:39:43.975 "assigned_rate_limits": { 00:39:43.975 "rw_ios_per_sec": 0, 00:39:43.975 "rw_mbytes_per_sec": 0, 00:39:43.975 "r_mbytes_per_sec": 0, 00:39:43.975 "w_mbytes_per_sec": 0 00:39:43.975 }, 00:39:43.975 "claimed": false, 00:39:43.975 "zoned": false, 00:39:43.975 "supported_io_types": { 00:39:43.975 "read": true, 00:39:43.975 "write": true, 00:39:43.975 "unmap": true, 00:39:43.975 "flush": true, 00:39:43.975 "reset": true, 00:39:43.975 "nvme_admin": false, 00:39:43.975 "nvme_io": false, 00:39:43.975 "nvme_io_md": false, 00:39:43.975 "write_zeroes": true, 00:39:43.975 "zcopy": true, 00:39:43.975 "get_zone_info": false, 00:39:43.975 "zone_management": false, 00:39:43.975 "zone_append": false, 00:39:43.975 "compare": false, 00:39:43.975 "compare_and_write": false, 00:39:43.975 "abort": true, 00:39:43.975 "seek_hole": false, 00:39:43.975 "seek_data": false, 00:39:43.975 "copy": true, 00:39:43.975 "nvme_iov_md": false 00:39:43.975 }, 00:39:43.975 "memory_domains": [ 00:39:43.975 { 00:39:43.975 "dma_device_id": "system", 00:39:43.975 "dma_device_type": 1 00:39:43.975 }, 00:39:43.975 { 00:39:43.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:43.975 "dma_device_type": 2 00:39:43.975 } 00:39:43.975 ], 00:39:43.975 "driver_specific": {} 00:39:43.975 } 00:39:43.975 ] 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:39:43.976 05:30:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:43.976 BaseBdev3 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.976 05:30:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:43.976 [ 00:39:43.976 { 00:39:43.976 "name": "BaseBdev3", 00:39:43.976 "aliases": [ 00:39:43.976 "cfb337a4-0de9-4dcc-8696-e39977b065a7" 00:39:43.976 ], 00:39:43.976 "product_name": "Malloc disk", 00:39:43.976 "block_size": 512, 00:39:43.976 "num_blocks": 65536, 00:39:43.976 "uuid": "cfb337a4-0de9-4dcc-8696-e39977b065a7", 00:39:43.976 "assigned_rate_limits": { 00:39:43.976 "rw_ios_per_sec": 0, 00:39:43.976 "rw_mbytes_per_sec": 0, 00:39:43.976 "r_mbytes_per_sec": 0, 00:39:43.976 "w_mbytes_per_sec": 0 00:39:43.976 }, 00:39:43.976 "claimed": false, 00:39:43.976 "zoned": false, 00:39:43.976 "supported_io_types": { 00:39:43.976 "read": true, 00:39:43.976 "write": true, 00:39:43.976 "unmap": true, 00:39:43.976 "flush": true, 00:39:43.976 "reset": true, 00:39:43.976 "nvme_admin": false, 00:39:43.976 "nvme_io": false, 00:39:43.976 "nvme_io_md": false, 00:39:43.976 "write_zeroes": true, 00:39:43.976 "zcopy": true, 00:39:43.976 "get_zone_info": false, 00:39:43.976 "zone_management": false, 00:39:43.976 "zone_append": false, 00:39:43.976 "compare": false, 00:39:43.976 "compare_and_write": false, 00:39:43.976 "abort": true, 00:39:43.976 "seek_hole": false, 00:39:43.976 "seek_data": false, 00:39:43.976 "copy": true, 00:39:43.976 "nvme_iov_md": false 00:39:43.976 }, 00:39:43.976 "memory_domains": [ 00:39:43.976 { 00:39:43.976 "dma_device_id": "system", 00:39:43.976 "dma_device_type": 1 00:39:43.976 }, 00:39:43.976 { 00:39:43.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:43.976 "dma_device_type": 2 00:39:43.976 } 00:39:43.976 ], 00:39:43.976 "driver_specific": {} 00:39:43.976 } 00:39:43.976 ] 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:43.976 BaseBdev4 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:43.976 [ 00:39:43.976 { 00:39:43.976 "name": "BaseBdev4", 00:39:43.976 "aliases": [ 00:39:43.976 "f069e1c2-7eaf-4c02-b549-29f64af3823d" 00:39:43.976 ], 00:39:43.976 "product_name": "Malloc disk", 00:39:43.976 "block_size": 512, 00:39:43.976 "num_blocks": 65536, 00:39:43.976 "uuid": "f069e1c2-7eaf-4c02-b549-29f64af3823d", 00:39:43.976 "assigned_rate_limits": { 00:39:43.976 "rw_ios_per_sec": 0, 00:39:43.976 "rw_mbytes_per_sec": 0, 00:39:43.976 "r_mbytes_per_sec": 0, 00:39:43.976 "w_mbytes_per_sec": 0 00:39:43.976 }, 00:39:43.976 "claimed": false, 00:39:43.976 "zoned": false, 00:39:43.976 "supported_io_types": { 00:39:43.976 "read": true, 00:39:43.976 "write": true, 00:39:43.976 "unmap": true, 00:39:43.976 "flush": true, 00:39:43.976 "reset": true, 00:39:43.976 "nvme_admin": false, 00:39:43.976 "nvme_io": false, 00:39:43.976 "nvme_io_md": false, 00:39:43.976 "write_zeroes": true, 00:39:43.976 "zcopy": true, 00:39:43.976 "get_zone_info": false, 00:39:43.976 "zone_management": false, 00:39:43.976 "zone_append": false, 00:39:43.976 "compare": false, 00:39:43.976 "compare_and_write": false, 00:39:43.976 "abort": true, 00:39:43.976 "seek_hole": false, 00:39:43.976 "seek_data": false, 00:39:43.976 "copy": true, 00:39:43.976 "nvme_iov_md": false 00:39:43.976 }, 00:39:43.976 "memory_domains": [ 00:39:43.976 { 00:39:43.976 "dma_device_id": "system", 00:39:43.976 "dma_device_type": 1 00:39:43.976 }, 00:39:43.976 { 00:39:43.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:43.976 "dma_device_type": 2 00:39:43.976 } 00:39:43.976 ], 00:39:43.976 "driver_specific": {} 00:39:43.976 } 00:39:43.976 ] 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:43.976 [2024-12-09 05:30:30.935901] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:39:43.976 [2024-12-09 05:30:30.936166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:39:43.976 [2024-12-09 05:30:30.936300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:43.976 [2024-12-09 05:30:30.939023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:39:43.976 [2024-12-09 05:30:30.939094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.976 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:44.235 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:44.236 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:44.236 "name": "Existed_Raid", 00:39:44.236 "uuid": "2015bfe9-d592-4ff4-acbd-a0bff78ff22b", 00:39:44.236 "strip_size_kb": 0, 00:39:44.236 "state": "configuring", 00:39:44.236 "raid_level": "raid1", 00:39:44.236 "superblock": true, 00:39:44.236 "num_base_bdevs": 4, 00:39:44.236 "num_base_bdevs_discovered": 3, 00:39:44.236 "num_base_bdevs_operational": 4, 00:39:44.236 "base_bdevs_list": [ 00:39:44.236 { 00:39:44.236 "name": "BaseBdev1", 00:39:44.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:44.236 "is_configured": false, 00:39:44.236 "data_offset": 0, 00:39:44.236 "data_size": 0 00:39:44.236 }, 00:39:44.236 { 00:39:44.236 "name": "BaseBdev2", 00:39:44.236 "uuid": "34edf619-26d6-4080-812f-2c99021b93c6", 
00:39:44.236 "is_configured": true, 00:39:44.236 "data_offset": 2048, 00:39:44.236 "data_size": 63488 00:39:44.236 }, 00:39:44.236 { 00:39:44.236 "name": "BaseBdev3", 00:39:44.236 "uuid": "cfb337a4-0de9-4dcc-8696-e39977b065a7", 00:39:44.236 "is_configured": true, 00:39:44.236 "data_offset": 2048, 00:39:44.236 "data_size": 63488 00:39:44.236 }, 00:39:44.236 { 00:39:44.236 "name": "BaseBdev4", 00:39:44.236 "uuid": "f069e1c2-7eaf-4c02-b549-29f64af3823d", 00:39:44.236 "is_configured": true, 00:39:44.236 "data_offset": 2048, 00:39:44.236 "data_size": 63488 00:39:44.236 } 00:39:44.236 ] 00:39:44.236 }' 00:39:44.236 05:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:44.236 05:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:44.495 05:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:39:44.495 05:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:44.495 05:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:44.754 [2024-12-09 05:30:31.468228] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:39:44.754 05:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:44.754 05:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:44.754 05:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:44.754 05:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:44.754 05:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:44.754 05:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:39:44.754 05:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:44.754 05:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:44.754 05:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:44.754 05:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:44.754 05:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:44.754 05:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:44.754 05:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:44.754 05:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:44.754 05:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:44.754 05:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:44.754 05:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:44.754 "name": "Existed_Raid", 00:39:44.754 "uuid": "2015bfe9-d592-4ff4-acbd-a0bff78ff22b", 00:39:44.754 "strip_size_kb": 0, 00:39:44.754 "state": "configuring", 00:39:44.754 "raid_level": "raid1", 00:39:44.754 "superblock": true, 00:39:44.754 "num_base_bdevs": 4, 00:39:44.754 "num_base_bdevs_discovered": 2, 00:39:44.754 "num_base_bdevs_operational": 4, 00:39:44.754 "base_bdevs_list": [ 00:39:44.754 { 00:39:44.754 "name": "BaseBdev1", 00:39:44.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:44.754 "is_configured": false, 00:39:44.754 "data_offset": 0, 00:39:44.754 "data_size": 0 00:39:44.754 }, 00:39:44.754 { 00:39:44.754 "name": null, 00:39:44.754 "uuid": "34edf619-26d6-4080-812f-2c99021b93c6", 00:39:44.754 
"is_configured": false, 00:39:44.754 "data_offset": 0, 00:39:44.754 "data_size": 63488 00:39:44.754 }, 00:39:44.754 { 00:39:44.754 "name": "BaseBdev3", 00:39:44.754 "uuid": "cfb337a4-0de9-4dcc-8696-e39977b065a7", 00:39:44.754 "is_configured": true, 00:39:44.754 "data_offset": 2048, 00:39:44.754 "data_size": 63488 00:39:44.754 }, 00:39:44.754 { 00:39:44.754 "name": "BaseBdev4", 00:39:44.754 "uuid": "f069e1c2-7eaf-4c02-b549-29f64af3823d", 00:39:44.754 "is_configured": true, 00:39:44.754 "data_offset": 2048, 00:39:44.754 "data_size": 63488 00:39:44.754 } 00:39:44.754 ] 00:39:44.754 }' 00:39:44.755 05:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:44.755 05:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:45.013 05:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:39:45.013 05:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:45.013 05:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.013 05:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:45.271 05:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.271 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:39:45.271 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:39:45.271 05:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.271 05:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:45.271 [2024-12-09 05:30:32.088742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:45.271 BaseBdev1 
00:39:45.271 05:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.271 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:39:45.271 05:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:39:45.271 05:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:45.271 05:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:39:45.271 05:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:45.271 05:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:45.271 05:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:45.271 05:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.271 05:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:45.271 05:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.271 05:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:39:45.271 05:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.271 05:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:45.271 [ 00:39:45.271 { 00:39:45.271 "name": "BaseBdev1", 00:39:45.271 "aliases": [ 00:39:45.271 "5932ccf9-1e99-4e28-8e58-b1dcc8e3e26d" 00:39:45.271 ], 00:39:45.271 "product_name": "Malloc disk", 00:39:45.271 "block_size": 512, 00:39:45.271 "num_blocks": 65536, 00:39:45.271 "uuid": "5932ccf9-1e99-4e28-8e58-b1dcc8e3e26d", 00:39:45.271 "assigned_rate_limits": { 00:39:45.271 
"rw_ios_per_sec": 0, 00:39:45.271 "rw_mbytes_per_sec": 0, 00:39:45.271 "r_mbytes_per_sec": 0, 00:39:45.271 "w_mbytes_per_sec": 0 00:39:45.271 }, 00:39:45.271 "claimed": true, 00:39:45.271 "claim_type": "exclusive_write", 00:39:45.271 "zoned": false, 00:39:45.271 "supported_io_types": { 00:39:45.271 "read": true, 00:39:45.271 "write": true, 00:39:45.271 "unmap": true, 00:39:45.271 "flush": true, 00:39:45.271 "reset": true, 00:39:45.271 "nvme_admin": false, 00:39:45.271 "nvme_io": false, 00:39:45.271 "nvme_io_md": false, 00:39:45.271 "write_zeroes": true, 00:39:45.271 "zcopy": true, 00:39:45.271 "get_zone_info": false, 00:39:45.271 "zone_management": false, 00:39:45.271 "zone_append": false, 00:39:45.271 "compare": false, 00:39:45.271 "compare_and_write": false, 00:39:45.272 "abort": true, 00:39:45.272 "seek_hole": false, 00:39:45.272 "seek_data": false, 00:39:45.272 "copy": true, 00:39:45.272 "nvme_iov_md": false 00:39:45.272 }, 00:39:45.272 "memory_domains": [ 00:39:45.272 { 00:39:45.272 "dma_device_id": "system", 00:39:45.272 "dma_device_type": 1 00:39:45.272 }, 00:39:45.272 { 00:39:45.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:45.272 "dma_device_type": 2 00:39:45.272 } 00:39:45.272 ], 00:39:45.272 "driver_specific": {} 00:39:45.272 } 00:39:45.272 ] 00:39:45.272 05:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.272 05:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:39:45.272 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:45.272 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:45.272 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:45.272 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:39:45.272 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:45.272 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:45.272 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:45.272 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:45.272 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:45.272 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:45.272 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:45.272 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:45.272 05:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.272 05:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:45.272 05:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.272 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:45.272 "name": "Existed_Raid", 00:39:45.272 "uuid": "2015bfe9-d592-4ff4-acbd-a0bff78ff22b", 00:39:45.272 "strip_size_kb": 0, 00:39:45.272 "state": "configuring", 00:39:45.272 "raid_level": "raid1", 00:39:45.272 "superblock": true, 00:39:45.272 "num_base_bdevs": 4, 00:39:45.272 "num_base_bdevs_discovered": 3, 00:39:45.272 "num_base_bdevs_operational": 4, 00:39:45.272 "base_bdevs_list": [ 00:39:45.272 { 00:39:45.272 "name": "BaseBdev1", 00:39:45.272 "uuid": "5932ccf9-1e99-4e28-8e58-b1dcc8e3e26d", 00:39:45.272 "is_configured": true, 00:39:45.272 "data_offset": 2048, 00:39:45.272 "data_size": 63488 
00:39:45.272 }, 00:39:45.272 { 00:39:45.272 "name": null, 00:39:45.272 "uuid": "34edf619-26d6-4080-812f-2c99021b93c6", 00:39:45.272 "is_configured": false, 00:39:45.272 "data_offset": 0, 00:39:45.272 "data_size": 63488 00:39:45.272 }, 00:39:45.272 { 00:39:45.272 "name": "BaseBdev3", 00:39:45.272 "uuid": "cfb337a4-0de9-4dcc-8696-e39977b065a7", 00:39:45.272 "is_configured": true, 00:39:45.272 "data_offset": 2048, 00:39:45.272 "data_size": 63488 00:39:45.272 }, 00:39:45.272 { 00:39:45.272 "name": "BaseBdev4", 00:39:45.272 "uuid": "f069e1c2-7eaf-4c02-b549-29f64af3823d", 00:39:45.272 "is_configured": true, 00:39:45.272 "data_offset": 2048, 00:39:45.272 "data_size": 63488 00:39:45.272 } 00:39:45.272 ] 00:39:45.272 }' 00:39:45.272 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:45.272 05:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:45.838 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:45.838 05:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.838 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:39:45.838 05:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:45.838 05:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.838 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:39:45.838 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:39:45.838 05:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.838 05:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:45.838 
[2024-12-09 05:30:32.741098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:39:45.838 05:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.838 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:45.838 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:45.838 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:45.838 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:45.838 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:45.838 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:45.838 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:45.838 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:45.838 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:45.838 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:45.838 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:45.838 05:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.838 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:45.838 05:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:45.838 05:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.838 05:30:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:45.838 "name": "Existed_Raid", 00:39:45.838 "uuid": "2015bfe9-d592-4ff4-acbd-a0bff78ff22b", 00:39:45.838 "strip_size_kb": 0, 00:39:45.838 "state": "configuring", 00:39:45.838 "raid_level": "raid1", 00:39:45.838 "superblock": true, 00:39:45.838 "num_base_bdevs": 4, 00:39:45.838 "num_base_bdevs_discovered": 2, 00:39:45.838 "num_base_bdevs_operational": 4, 00:39:45.838 "base_bdevs_list": [ 00:39:45.838 { 00:39:45.838 "name": "BaseBdev1", 00:39:45.838 "uuid": "5932ccf9-1e99-4e28-8e58-b1dcc8e3e26d", 00:39:45.838 "is_configured": true, 00:39:45.838 "data_offset": 2048, 00:39:45.838 "data_size": 63488 00:39:45.838 }, 00:39:45.838 { 00:39:45.838 "name": null, 00:39:45.838 "uuid": "34edf619-26d6-4080-812f-2c99021b93c6", 00:39:45.838 "is_configured": false, 00:39:45.838 "data_offset": 0, 00:39:45.838 "data_size": 63488 00:39:45.838 }, 00:39:45.838 { 00:39:45.838 "name": null, 00:39:45.838 "uuid": "cfb337a4-0de9-4dcc-8696-e39977b065a7", 00:39:45.838 "is_configured": false, 00:39:45.838 "data_offset": 0, 00:39:45.838 "data_size": 63488 00:39:45.838 }, 00:39:45.838 { 00:39:45.838 "name": "BaseBdev4", 00:39:45.838 "uuid": "f069e1c2-7eaf-4c02-b549-29f64af3823d", 00:39:45.838 "is_configured": true, 00:39:45.838 "data_offset": 2048, 00:39:45.838 "data_size": 63488 00:39:45.838 } 00:39:45.838 ] 00:39:45.838 }' 00:39:45.838 05:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:45.839 05:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:46.404 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:46.404 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:39:46.404 05:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:46.405 
05:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:46.405 05:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:46.405 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:39:46.405 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:39:46.405 05:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:46.405 05:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:46.405 [2024-12-09 05:30:33.297240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:39:46.405 05:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:46.405 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:46.405 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:46.405 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:46.405 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:46.405 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:46.405 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:46.405 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:46.405 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:46.405 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:39:46.405 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:46.405 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:46.405 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:46.405 05:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:46.405 05:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:46.405 05:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:46.405 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:46.405 "name": "Existed_Raid", 00:39:46.405 "uuid": "2015bfe9-d592-4ff4-acbd-a0bff78ff22b", 00:39:46.405 "strip_size_kb": 0, 00:39:46.405 "state": "configuring", 00:39:46.405 "raid_level": "raid1", 00:39:46.405 "superblock": true, 00:39:46.405 "num_base_bdevs": 4, 00:39:46.405 "num_base_bdevs_discovered": 3, 00:39:46.405 "num_base_bdevs_operational": 4, 00:39:46.405 "base_bdevs_list": [ 00:39:46.405 { 00:39:46.405 "name": "BaseBdev1", 00:39:46.405 "uuid": "5932ccf9-1e99-4e28-8e58-b1dcc8e3e26d", 00:39:46.405 "is_configured": true, 00:39:46.405 "data_offset": 2048, 00:39:46.405 "data_size": 63488 00:39:46.405 }, 00:39:46.405 { 00:39:46.405 "name": null, 00:39:46.405 "uuid": "34edf619-26d6-4080-812f-2c99021b93c6", 00:39:46.405 "is_configured": false, 00:39:46.405 "data_offset": 0, 00:39:46.405 "data_size": 63488 00:39:46.405 }, 00:39:46.405 { 00:39:46.405 "name": "BaseBdev3", 00:39:46.405 "uuid": "cfb337a4-0de9-4dcc-8696-e39977b065a7", 00:39:46.405 "is_configured": true, 00:39:46.405 "data_offset": 2048, 00:39:46.405 "data_size": 63488 00:39:46.405 }, 00:39:46.405 { 00:39:46.405 "name": "BaseBdev4", 00:39:46.405 "uuid": 
"f069e1c2-7eaf-4c02-b549-29f64af3823d", 00:39:46.405 "is_configured": true, 00:39:46.405 "data_offset": 2048, 00:39:46.405 "data_size": 63488 00:39:46.405 } 00:39:46.405 ] 00:39:46.405 }' 00:39:46.405 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:46.405 05:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:46.970 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:46.970 05:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:46.970 05:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:46.970 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:39:46.970 05:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:46.970 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:39:46.970 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:39:46.970 05:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:46.970 05:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:46.970 [2024-12-09 05:30:33.869490] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:39:47.228 05:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:47.228 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:47.228 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:47.228 05:30:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:47.228 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:47.228 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:47.228 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:47.228 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:47.228 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:47.228 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:47.228 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:47.228 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:47.228 05:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:47.228 05:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:47.228 05:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:47.228 05:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:47.228 05:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:47.228 "name": "Existed_Raid", 00:39:47.228 "uuid": "2015bfe9-d592-4ff4-acbd-a0bff78ff22b", 00:39:47.228 "strip_size_kb": 0, 00:39:47.228 "state": "configuring", 00:39:47.228 "raid_level": "raid1", 00:39:47.228 "superblock": true, 00:39:47.228 "num_base_bdevs": 4, 00:39:47.228 "num_base_bdevs_discovered": 2, 00:39:47.228 "num_base_bdevs_operational": 4, 00:39:47.228 "base_bdevs_list": [ 00:39:47.228 { 00:39:47.228 "name": null, 00:39:47.228 
"uuid": "5932ccf9-1e99-4e28-8e58-b1dcc8e3e26d", 00:39:47.228 "is_configured": false, 00:39:47.228 "data_offset": 0, 00:39:47.228 "data_size": 63488 00:39:47.228 }, 00:39:47.228 { 00:39:47.228 "name": null, 00:39:47.228 "uuid": "34edf619-26d6-4080-812f-2c99021b93c6", 00:39:47.228 "is_configured": false, 00:39:47.228 "data_offset": 0, 00:39:47.228 "data_size": 63488 00:39:47.228 }, 00:39:47.228 { 00:39:47.228 "name": "BaseBdev3", 00:39:47.228 "uuid": "cfb337a4-0de9-4dcc-8696-e39977b065a7", 00:39:47.228 "is_configured": true, 00:39:47.228 "data_offset": 2048, 00:39:47.228 "data_size": 63488 00:39:47.228 }, 00:39:47.228 { 00:39:47.228 "name": "BaseBdev4", 00:39:47.228 "uuid": "f069e1c2-7eaf-4c02-b549-29f64af3823d", 00:39:47.228 "is_configured": true, 00:39:47.228 "data_offset": 2048, 00:39:47.228 "data_size": 63488 00:39:47.228 } 00:39:47.228 ] 00:39:47.228 }' 00:39:47.228 05:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:47.228 05:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:47.794 05:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:47.794 05:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:39:47.794 05:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:47.794 05:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:47.794 05:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:47.794 05:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:39:47.794 05:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:39:47.794 05:30:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:39:47.794 05:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:47.794 [2024-12-09 05:30:34.539259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:47.794 05:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:47.794 05:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:47.794 05:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:47.794 05:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:47.794 05:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:47.794 05:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:47.794 05:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:47.794 05:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:47.794 05:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:47.794 05:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:47.794 05:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:47.794 05:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:47.794 05:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:47.794 05:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:47.794 05:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:47.794 05:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:47.794 05:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:47.794 "name": "Existed_Raid", 00:39:47.794 "uuid": "2015bfe9-d592-4ff4-acbd-a0bff78ff22b", 00:39:47.794 "strip_size_kb": 0, 00:39:47.794 "state": "configuring", 00:39:47.794 "raid_level": "raid1", 00:39:47.794 "superblock": true, 00:39:47.794 "num_base_bdevs": 4, 00:39:47.794 "num_base_bdevs_discovered": 3, 00:39:47.794 "num_base_bdevs_operational": 4, 00:39:47.794 "base_bdevs_list": [ 00:39:47.794 { 00:39:47.794 "name": null, 00:39:47.794 "uuid": "5932ccf9-1e99-4e28-8e58-b1dcc8e3e26d", 00:39:47.794 "is_configured": false, 00:39:47.794 "data_offset": 0, 00:39:47.794 "data_size": 63488 00:39:47.794 }, 00:39:47.794 { 00:39:47.794 "name": "BaseBdev2", 00:39:47.794 "uuid": "34edf619-26d6-4080-812f-2c99021b93c6", 00:39:47.794 "is_configured": true, 00:39:47.794 "data_offset": 2048, 00:39:47.794 "data_size": 63488 00:39:47.794 }, 00:39:47.794 { 00:39:47.794 "name": "BaseBdev3", 00:39:47.794 "uuid": "cfb337a4-0de9-4dcc-8696-e39977b065a7", 00:39:47.794 "is_configured": true, 00:39:47.794 "data_offset": 2048, 00:39:47.794 "data_size": 63488 00:39:47.794 }, 00:39:47.794 { 00:39:47.794 "name": "BaseBdev4", 00:39:47.794 "uuid": "f069e1c2-7eaf-4c02-b549-29f64af3823d", 00:39:47.794 "is_configured": true, 00:39:47.794 "data_offset": 2048, 00:39:47.794 "data_size": 63488 00:39:47.794 } 00:39:47.794 ] 00:39:47.794 }' 00:39:47.794 05:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:47.794 05:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:48.362 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:48.362 05:30:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.362 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:48.362 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:39:48.362 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.362 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:39:48.362 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:48.362 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.362 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:48.362 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:39:48.362 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.362 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5932ccf9-1e99-4e28-8e58-b1dcc8e3e26d 00:39:48.362 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.362 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:48.362 [2024-12-09 05:30:35.199586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:39:48.362 NewBaseBdev 00:39:48.362 [2024-12-09 05:30:35.200200] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:39:48.362 [2024-12-09 05:30:35.200276] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:39:48.362 [2024-12-09 05:30:35.200607] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d0000063c0 00:39:48.362 [2024-12-09 05:30:35.200800] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:39:48.362 [2024-12-09 05:30:35.200834] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:39:48.362 [2024-12-09 05:30:35.200995] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:48.362 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.362 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:39:48.362 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:39:48.362 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:48.363 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:39:48.363 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:48.363 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:48.363 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:48.363 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.363 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:48.363 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.363 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:39:48.363 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.363 05:30:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:39:48.363 [ 00:39:48.363 { 00:39:48.363 "name": "NewBaseBdev", 00:39:48.363 "aliases": [ 00:39:48.363 "5932ccf9-1e99-4e28-8e58-b1dcc8e3e26d" 00:39:48.363 ], 00:39:48.363 "product_name": "Malloc disk", 00:39:48.363 "block_size": 512, 00:39:48.363 "num_blocks": 65536, 00:39:48.363 "uuid": "5932ccf9-1e99-4e28-8e58-b1dcc8e3e26d", 00:39:48.363 "assigned_rate_limits": { 00:39:48.363 "rw_ios_per_sec": 0, 00:39:48.363 "rw_mbytes_per_sec": 0, 00:39:48.363 "r_mbytes_per_sec": 0, 00:39:48.363 "w_mbytes_per_sec": 0 00:39:48.363 }, 00:39:48.363 "claimed": true, 00:39:48.363 "claim_type": "exclusive_write", 00:39:48.363 "zoned": false, 00:39:48.363 "supported_io_types": { 00:39:48.363 "read": true, 00:39:48.363 "write": true, 00:39:48.363 "unmap": true, 00:39:48.363 "flush": true, 00:39:48.363 "reset": true, 00:39:48.363 "nvme_admin": false, 00:39:48.363 "nvme_io": false, 00:39:48.363 "nvme_io_md": false, 00:39:48.363 "write_zeroes": true, 00:39:48.363 "zcopy": true, 00:39:48.363 "get_zone_info": false, 00:39:48.363 "zone_management": false, 00:39:48.363 "zone_append": false, 00:39:48.363 "compare": false, 00:39:48.363 "compare_and_write": false, 00:39:48.363 "abort": true, 00:39:48.363 "seek_hole": false, 00:39:48.363 "seek_data": false, 00:39:48.363 "copy": true, 00:39:48.363 "nvme_iov_md": false 00:39:48.363 }, 00:39:48.363 "memory_domains": [ 00:39:48.363 { 00:39:48.363 "dma_device_id": "system", 00:39:48.363 "dma_device_type": 1 00:39:48.363 }, 00:39:48.363 { 00:39:48.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:48.363 "dma_device_type": 2 00:39:48.363 } 00:39:48.363 ], 00:39:48.363 "driver_specific": {} 00:39:48.363 } 00:39:48.363 ] 00:39:48.363 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.363 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:39:48.363 05:30:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:39:48.363 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:48.363 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:48.363 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:48.363 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:48.363 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:48.363 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:48.363 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:48.363 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:48.363 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:48.363 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:48.363 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:48.363 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.363 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:48.363 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.363 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:48.363 "name": "Existed_Raid", 00:39:48.363 "uuid": "2015bfe9-d592-4ff4-acbd-a0bff78ff22b", 00:39:48.363 "strip_size_kb": 0, 00:39:48.363 "state": "online", 00:39:48.363 "raid_level": 
"raid1", 00:39:48.363 "superblock": true, 00:39:48.363 "num_base_bdevs": 4, 00:39:48.363 "num_base_bdevs_discovered": 4, 00:39:48.363 "num_base_bdevs_operational": 4, 00:39:48.363 "base_bdevs_list": [ 00:39:48.363 { 00:39:48.363 "name": "NewBaseBdev", 00:39:48.363 "uuid": "5932ccf9-1e99-4e28-8e58-b1dcc8e3e26d", 00:39:48.363 "is_configured": true, 00:39:48.363 "data_offset": 2048, 00:39:48.363 "data_size": 63488 00:39:48.363 }, 00:39:48.363 { 00:39:48.363 "name": "BaseBdev2", 00:39:48.363 "uuid": "34edf619-26d6-4080-812f-2c99021b93c6", 00:39:48.363 "is_configured": true, 00:39:48.363 "data_offset": 2048, 00:39:48.363 "data_size": 63488 00:39:48.363 }, 00:39:48.363 { 00:39:48.363 "name": "BaseBdev3", 00:39:48.363 "uuid": "cfb337a4-0de9-4dcc-8696-e39977b065a7", 00:39:48.363 "is_configured": true, 00:39:48.363 "data_offset": 2048, 00:39:48.363 "data_size": 63488 00:39:48.363 }, 00:39:48.363 { 00:39:48.363 "name": "BaseBdev4", 00:39:48.363 "uuid": "f069e1c2-7eaf-4c02-b549-29f64af3823d", 00:39:48.363 "is_configured": true, 00:39:48.363 "data_offset": 2048, 00:39:48.363 "data_size": 63488 00:39:48.363 } 00:39:48.363 ] 00:39:48.363 }' 00:39:48.363 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:48.363 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:48.928 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:39:48.928 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:39:48.928 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:39:48.928 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:39:48.928 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:39:48.928 05:30:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:39:48.928 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:39:48.928 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:39:48.928 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.928 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:48.928 [2024-12-09 05:30:35.760317] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:48.928 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.928 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:48.928 "name": "Existed_Raid", 00:39:48.928 "aliases": [ 00:39:48.928 "2015bfe9-d592-4ff4-acbd-a0bff78ff22b" 00:39:48.928 ], 00:39:48.928 "product_name": "Raid Volume", 00:39:48.928 "block_size": 512, 00:39:48.928 "num_blocks": 63488, 00:39:48.928 "uuid": "2015bfe9-d592-4ff4-acbd-a0bff78ff22b", 00:39:48.928 "assigned_rate_limits": { 00:39:48.928 "rw_ios_per_sec": 0, 00:39:48.928 "rw_mbytes_per_sec": 0, 00:39:48.928 "r_mbytes_per_sec": 0, 00:39:48.928 "w_mbytes_per_sec": 0 00:39:48.928 }, 00:39:48.928 "claimed": false, 00:39:48.928 "zoned": false, 00:39:48.928 "supported_io_types": { 00:39:48.928 "read": true, 00:39:48.928 "write": true, 00:39:48.928 "unmap": false, 00:39:48.928 "flush": false, 00:39:48.928 "reset": true, 00:39:48.928 "nvme_admin": false, 00:39:48.928 "nvme_io": false, 00:39:48.928 "nvme_io_md": false, 00:39:48.928 "write_zeroes": true, 00:39:48.928 "zcopy": false, 00:39:48.928 "get_zone_info": false, 00:39:48.928 "zone_management": false, 00:39:48.928 "zone_append": false, 00:39:48.928 "compare": false, 00:39:48.928 "compare_and_write": false, 00:39:48.928 "abort": false, 00:39:48.928 "seek_hole": false, 
00:39:48.928 "seek_data": false, 00:39:48.928 "copy": false, 00:39:48.928 "nvme_iov_md": false 00:39:48.928 }, 00:39:48.928 "memory_domains": [ 00:39:48.928 { 00:39:48.928 "dma_device_id": "system", 00:39:48.928 "dma_device_type": 1 00:39:48.928 }, 00:39:48.928 { 00:39:48.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:48.928 "dma_device_type": 2 00:39:48.928 }, 00:39:48.928 { 00:39:48.928 "dma_device_id": "system", 00:39:48.928 "dma_device_type": 1 00:39:48.928 }, 00:39:48.928 { 00:39:48.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:48.928 "dma_device_type": 2 00:39:48.928 }, 00:39:48.928 { 00:39:48.928 "dma_device_id": "system", 00:39:48.928 "dma_device_type": 1 00:39:48.928 }, 00:39:48.928 { 00:39:48.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:48.928 "dma_device_type": 2 00:39:48.928 }, 00:39:48.928 { 00:39:48.928 "dma_device_id": "system", 00:39:48.928 "dma_device_type": 1 00:39:48.928 }, 00:39:48.928 { 00:39:48.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:48.928 "dma_device_type": 2 00:39:48.928 } 00:39:48.928 ], 00:39:48.928 "driver_specific": { 00:39:48.928 "raid": { 00:39:48.928 "uuid": "2015bfe9-d592-4ff4-acbd-a0bff78ff22b", 00:39:48.928 "strip_size_kb": 0, 00:39:48.928 "state": "online", 00:39:48.928 "raid_level": "raid1", 00:39:48.928 "superblock": true, 00:39:48.928 "num_base_bdevs": 4, 00:39:48.928 "num_base_bdevs_discovered": 4, 00:39:48.928 "num_base_bdevs_operational": 4, 00:39:48.928 "base_bdevs_list": [ 00:39:48.928 { 00:39:48.928 "name": "NewBaseBdev", 00:39:48.928 "uuid": "5932ccf9-1e99-4e28-8e58-b1dcc8e3e26d", 00:39:48.929 "is_configured": true, 00:39:48.929 "data_offset": 2048, 00:39:48.929 "data_size": 63488 00:39:48.929 }, 00:39:48.929 { 00:39:48.929 "name": "BaseBdev2", 00:39:48.929 "uuid": "34edf619-26d6-4080-812f-2c99021b93c6", 00:39:48.929 "is_configured": true, 00:39:48.929 "data_offset": 2048, 00:39:48.929 "data_size": 63488 00:39:48.929 }, 00:39:48.929 { 00:39:48.929 "name": "BaseBdev3", 00:39:48.929 "uuid": 
"cfb337a4-0de9-4dcc-8696-e39977b065a7", 00:39:48.929 "is_configured": true, 00:39:48.929 "data_offset": 2048, 00:39:48.929 "data_size": 63488 00:39:48.929 }, 00:39:48.929 { 00:39:48.929 "name": "BaseBdev4", 00:39:48.929 "uuid": "f069e1c2-7eaf-4c02-b549-29f64af3823d", 00:39:48.929 "is_configured": true, 00:39:48.929 "data_offset": 2048, 00:39:48.929 "data_size": 63488 00:39:48.929 } 00:39:48.929 ] 00:39:48.929 } 00:39:48.929 } 00:39:48.929 }' 00:39:48.929 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:39:48.929 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:39:48.929 BaseBdev2 00:39:48.929 BaseBdev3 00:39:48.929 BaseBdev4' 00:39:48.929 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:49.188 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:39:49.188 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:49.188 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:49.188 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:39:49.188 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.188 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:49.188 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.188 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:49.188 05:30:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:49.188 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:49.188 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:39:49.188 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.188 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:49.188 05:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:49.188 05:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.188 05:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:49.188 05:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:49.188 05:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:49.188 05:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:39:49.188 05:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:49.188 05:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.188 05:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:49.188 05:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.188 05:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:49.188 05:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:49.188 
05:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:49.188 05:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:39:49.188 05:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:49.188 05:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.188 05:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:49.188 05:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.188 05:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:49.188 05:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:49.188 05:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:39:49.188 05:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.188 05:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:49.188 [2024-12-09 05:30:36.135982] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:39:49.188 [2024-12-09 05:30:36.136017] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:49.188 [2024-12-09 05:30:36.136143] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:49.188 [2024-12-09 05:30:36.136531] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:49.188 [2024-12-09 05:30:36.136552] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:39:49.188 05:30:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.188 05:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74090 00:39:49.188 05:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74090 ']' 00:39:49.188 05:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74090 00:39:49.188 05:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:39:49.188 05:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:49.188 05:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74090 00:39:49.447 killing process with pid 74090 00:39:49.447 05:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:49.447 05:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:49.447 05:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74090' 00:39:49.447 05:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74090 00:39:49.447 [2024-12-09 05:30:36.176568] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:49.447 05:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74090 00:39:49.707 [2024-12-09 05:30:36.496639] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:51.094 05:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:39:51.094 00:39:51.094 real 0m13.182s 00:39:51.094 user 0m21.744s 00:39:51.094 sys 0m1.938s 00:39:51.094 05:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:51.094 05:30:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:39:51.094 ************************************ 00:39:51.094 END TEST raid_state_function_test_sb 00:39:51.094 ************************************ 00:39:51.094 05:30:37 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:39:51.094 05:30:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:51.094 05:30:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:51.094 05:30:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:51.094 ************************************ 00:39:51.094 START TEST raid_superblock_test 00:39:51.094 ************************************ 00:39:51.094 05:30:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:39:51.094 05:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:39:51.094 05:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:39:51.094 05:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:39:51.094 05:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:39:51.094 05:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:39:51.094 05:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:39:51.094 05:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:39:51.094 05:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:39:51.094 05:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:39:51.094 05:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:39:51.094 05:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:39:51.094 05:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:39:51.094 05:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:39:51.094 05:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:39:51.094 05:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:39:51.094 05:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74773 00:39:51.094 05:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74773 00:39:51.094 05:30:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74773 ']' 00:39:51.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:51.094 05:30:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:51.094 05:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:39:51.094 05:30:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:51.094 05:30:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:51.094 05:30:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:51.094 05:30:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:51.094 [2024-12-09 05:30:37.823033] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:39:51.094 [2024-12-09 05:30:37.824278] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74773 ] 00:39:51.094 [2024-12-09 05:30:38.014401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:51.353 [2024-12-09 05:30:38.153609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:51.611 [2024-12-09 05:30:38.376258] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:51.611 [2024-12-09 05:30:38.376341] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:51.869 05:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:51.869 05:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:39:51.869 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:39:51.869 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:39:51.869 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:39:51.869 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:39:51.869 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:39:51.869 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:39:51.869 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:39:51.869 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:39:51.869 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:39:51.869 
05:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:51.869 05:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:51.869 malloc1 00:39:51.869 05:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.869 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:39:51.869 05:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:51.869 05:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:51.869 [2024-12-09 05:30:38.824730] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:39:51.869 [2024-12-09 05:30:38.824988] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:51.869 [2024-12-09 05:30:38.825084] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:39:51.869 [2024-12-09 05:30:38.825407] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:51.869 [2024-12-09 05:30:38.828582] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:51.869 [2024-12-09 05:30:38.828815] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:39:51.869 pt1 00:39:51.869 05:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.869 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:39:51.869 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:39:51.869 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:39:51.869 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:39:51.869 05:30:38 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:39:51.869 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:39:51.869 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:39:51.869 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:39:51.869 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:39:51.869 05:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:51.869 05:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:52.127 malloc2 00:39:52.127 05:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:52.127 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:52.127 05:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:52.127 05:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:52.127 [2024-12-09 05:30:38.887354] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:52.127 [2024-12-09 05:30:38.887601] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:52.127 [2024-12-09 05:30:38.887649] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:39:52.127 [2024-12-09 05:30:38.887665] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:52.127 [2024-12-09 05:30:38.890828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:52.127 [2024-12-09 05:30:38.890906] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:52.127 
pt2 00:39:52.127 05:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:52.127 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:39:52.127 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:39:52.127 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:39:52.127 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:39:52.127 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:39:52.127 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:39:52.127 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:39:52.127 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:39:52.127 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:39:52.127 05:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:52.127 05:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:52.127 malloc3 00:39:52.127 05:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:52.127 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:39:52.127 05:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:52.127 05:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:52.127 [2024-12-09 05:30:38.950265] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:39:52.127 [2024-12-09 05:30:38.950374] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:52.127 [2024-12-09 05:30:38.950409] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:39:52.127 [2024-12-09 05:30:38.950424] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:52.127 [2024-12-09 05:30:38.953317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:52.127 [2024-12-09 05:30:38.953360] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:39:52.127 pt3 00:39:52.127 05:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:52.127 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:39:52.127 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:39:52.127 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:39:52.127 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:39:52.127 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:39:52.127 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:39:52.127 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:39:52.128 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:39:52.128 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:39:52.128 05:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:52.128 05:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:52.128 malloc4 00:39:52.128 05:30:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:52.128 05:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:39:52.128 05:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:52.128 05:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:52.128 [2024-12-09 05:30:39.007423] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:39:52.128 [2024-12-09 05:30:39.007643] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:52.128 [2024-12-09 05:30:39.007720] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:39:52.128 [2024-12-09 05:30:39.007742] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:52.128 [2024-12-09 05:30:39.011026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:52.128 [2024-12-09 05:30:39.011102] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:39:52.128 pt4 00:39:52.128 05:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:52.128 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:39:52.128 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:39:52.128 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:39:52.128 05:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:52.128 05:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:52.128 [2024-12-09 05:30:39.015482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:39:52.128 [2024-12-09 05:30:39.018425] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:52.128 [2024-12-09 05:30:39.018679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:39:52.128 [2024-12-09 05:30:39.018867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:39:52.128 [2024-12-09 05:30:39.019275] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:39:52.128 [2024-12-09 05:30:39.019459] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:39:52.128 [2024-12-09 05:30:39.019919] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:39:52.128 [2024-12-09 05:30:39.020233] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:39:52.128 [2024-12-09 05:30:39.020259] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:39:52.128 [2024-12-09 05:30:39.020568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:52.128 05:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:52.128 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:39:52.128 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:52.128 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:52.128 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:52.128 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:52.128 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:52.128 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:52.128 
05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:52.128 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:52.128 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:52.128 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:52.128 05:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:52.128 05:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:52.128 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:52.128 05:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:52.128 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:52.128 "name": "raid_bdev1", 00:39:52.128 "uuid": "cde31e50-895b-4887-8d2f-73113ebc3689", 00:39:52.128 "strip_size_kb": 0, 00:39:52.128 "state": "online", 00:39:52.128 "raid_level": "raid1", 00:39:52.128 "superblock": true, 00:39:52.128 "num_base_bdevs": 4, 00:39:52.128 "num_base_bdevs_discovered": 4, 00:39:52.128 "num_base_bdevs_operational": 4, 00:39:52.128 "base_bdevs_list": [ 00:39:52.128 { 00:39:52.128 "name": "pt1", 00:39:52.128 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:52.128 "is_configured": true, 00:39:52.128 "data_offset": 2048, 00:39:52.128 "data_size": 63488 00:39:52.128 }, 00:39:52.128 { 00:39:52.128 "name": "pt2", 00:39:52.128 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:52.128 "is_configured": true, 00:39:52.128 "data_offset": 2048, 00:39:52.128 "data_size": 63488 00:39:52.128 }, 00:39:52.128 { 00:39:52.128 "name": "pt3", 00:39:52.128 "uuid": "00000000-0000-0000-0000-000000000003", 00:39:52.128 "is_configured": true, 00:39:52.128 "data_offset": 2048, 00:39:52.128 "data_size": 63488 
00:39:52.128 }, 00:39:52.128 { 00:39:52.128 "name": "pt4", 00:39:52.128 "uuid": "00000000-0000-0000-0000-000000000004", 00:39:52.128 "is_configured": true, 00:39:52.128 "data_offset": 2048, 00:39:52.128 "data_size": 63488 00:39:52.128 } 00:39:52.128 ] 00:39:52.128 }' 00:39:52.128 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:52.128 05:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:52.694 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:39:52.694 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:39:52.694 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:39:52.694 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:39:52.694 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:39:52.694 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:39:52.694 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:39:52.694 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:39:52.694 05:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:52.694 05:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:52.694 [2024-12-09 05:30:39.561175] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:52.694 05:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:52.694 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:52.694 "name": "raid_bdev1", 00:39:52.694 "aliases": [ 00:39:52.694 "cde31e50-895b-4887-8d2f-73113ebc3689" 00:39:52.694 ], 
00:39:52.694 "product_name": "Raid Volume", 00:39:52.694 "block_size": 512, 00:39:52.694 "num_blocks": 63488, 00:39:52.694 "uuid": "cde31e50-895b-4887-8d2f-73113ebc3689", 00:39:52.694 "assigned_rate_limits": { 00:39:52.694 "rw_ios_per_sec": 0, 00:39:52.694 "rw_mbytes_per_sec": 0, 00:39:52.694 "r_mbytes_per_sec": 0, 00:39:52.694 "w_mbytes_per_sec": 0 00:39:52.694 }, 00:39:52.694 "claimed": false, 00:39:52.694 "zoned": false, 00:39:52.694 "supported_io_types": { 00:39:52.694 "read": true, 00:39:52.694 "write": true, 00:39:52.694 "unmap": false, 00:39:52.694 "flush": false, 00:39:52.694 "reset": true, 00:39:52.694 "nvme_admin": false, 00:39:52.694 "nvme_io": false, 00:39:52.694 "nvme_io_md": false, 00:39:52.694 "write_zeroes": true, 00:39:52.694 "zcopy": false, 00:39:52.694 "get_zone_info": false, 00:39:52.694 "zone_management": false, 00:39:52.694 "zone_append": false, 00:39:52.694 "compare": false, 00:39:52.694 "compare_and_write": false, 00:39:52.694 "abort": false, 00:39:52.694 "seek_hole": false, 00:39:52.695 "seek_data": false, 00:39:52.695 "copy": false, 00:39:52.695 "nvme_iov_md": false 00:39:52.695 }, 00:39:52.695 "memory_domains": [ 00:39:52.695 { 00:39:52.695 "dma_device_id": "system", 00:39:52.695 "dma_device_type": 1 00:39:52.695 }, 00:39:52.695 { 00:39:52.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:52.695 "dma_device_type": 2 00:39:52.695 }, 00:39:52.695 { 00:39:52.695 "dma_device_id": "system", 00:39:52.695 "dma_device_type": 1 00:39:52.695 }, 00:39:52.695 { 00:39:52.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:52.695 "dma_device_type": 2 00:39:52.695 }, 00:39:52.695 { 00:39:52.695 "dma_device_id": "system", 00:39:52.695 "dma_device_type": 1 00:39:52.695 }, 00:39:52.695 { 00:39:52.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:52.695 "dma_device_type": 2 00:39:52.695 }, 00:39:52.695 { 00:39:52.695 "dma_device_id": "system", 00:39:52.695 "dma_device_type": 1 00:39:52.695 }, 00:39:52.695 { 00:39:52.695 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:39:52.695 "dma_device_type": 2 00:39:52.695 } 00:39:52.695 ], 00:39:52.695 "driver_specific": { 00:39:52.695 "raid": { 00:39:52.695 "uuid": "cde31e50-895b-4887-8d2f-73113ebc3689", 00:39:52.695 "strip_size_kb": 0, 00:39:52.695 "state": "online", 00:39:52.695 "raid_level": "raid1", 00:39:52.695 "superblock": true, 00:39:52.695 "num_base_bdevs": 4, 00:39:52.695 "num_base_bdevs_discovered": 4, 00:39:52.695 "num_base_bdevs_operational": 4, 00:39:52.695 "base_bdevs_list": [ 00:39:52.695 { 00:39:52.695 "name": "pt1", 00:39:52.695 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:52.695 "is_configured": true, 00:39:52.695 "data_offset": 2048, 00:39:52.695 "data_size": 63488 00:39:52.695 }, 00:39:52.695 { 00:39:52.695 "name": "pt2", 00:39:52.695 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:52.695 "is_configured": true, 00:39:52.695 "data_offset": 2048, 00:39:52.695 "data_size": 63488 00:39:52.695 }, 00:39:52.695 { 00:39:52.695 "name": "pt3", 00:39:52.695 "uuid": "00000000-0000-0000-0000-000000000003", 00:39:52.695 "is_configured": true, 00:39:52.695 "data_offset": 2048, 00:39:52.695 "data_size": 63488 00:39:52.695 }, 00:39:52.695 { 00:39:52.695 "name": "pt4", 00:39:52.695 "uuid": "00000000-0000-0000-0000-000000000004", 00:39:52.695 "is_configured": true, 00:39:52.695 "data_offset": 2048, 00:39:52.695 "data_size": 63488 00:39:52.695 } 00:39:52.695 ] 00:39:52.695 } 00:39:52.695 } 00:39:52.695 }' 00:39:52.695 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:39:52.695 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:39:52.695 pt2 00:39:52.695 pt3 00:39:52.695 pt4' 00:39:52.695 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:52.953 05:30:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:39:52.953 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:52.953 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:39:52.953 05:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:52.953 05:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:52.953 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:52.953 05:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:52.953 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:52.953 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:52.953 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:52.953 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:39:52.953 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:52.953 05:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:52.953 05:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:52.953 05:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:52.953 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:52.953 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:52.953 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:52.953 05:30:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:39:52.953 05:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:52.953 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:52.953 05:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:52.953 05:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:52.953 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:52.953 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:52.953 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:52.953 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:39:52.953 05:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:52.953 05:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:52.953 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:52.953 05:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.212 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:53.212 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:53.212 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:39:53.212 05:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.212 05:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:39:53.212 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:39:53.212 [2024-12-09 05:30:39.937137] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:53.212 05:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.212 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=cde31e50-895b-4887-8d2f-73113ebc3689 00:39:53.212 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z cde31e50-895b-4887-8d2f-73113ebc3689 ']' 00:39:53.212 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:39:53.212 05:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.212 05:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:53.212 [2024-12-09 05:30:39.992786] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:53.212 [2024-12-09 05:30:39.992950] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:53.212 [2024-12-09 05:30:39.993173] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:53.212 [2024-12-09 05:30:39.993418] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:53.212 [2024-12-09 05:30:39.993549] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:39:53.212 05:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.212 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:53.212 05:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:39:53.212 05:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:39:53.212 05:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:53.212 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.212 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:39:53.212 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:39:53.212 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:39:53.212 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:39:53.212 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.212 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:53.212 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.212 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:39:53.212 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:39:53.212 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.212 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:53.212 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:53.213 [2024-12-09 05:30:40.156920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:39:53.213 [2024-12-09 05:30:40.160157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:39:53.213 [2024-12-09 05:30:40.160235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:39:53.213 [2024-12-09 05:30:40.160292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:39:53.213 [2024-12-09 05:30:40.160369] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:39:53.213 [2024-12-09 05:30:40.160478] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:39:53.213 [2024-12-09 05:30:40.160511] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:39:53.213 [2024-12-09 05:30:40.160573] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:39:53.213 [2024-12-09 05:30:40.160611] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:53.213 [2024-12-09 05:30:40.160628] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:39:53.213 request: 00:39:53.213 { 00:39:53.213 "name": "raid_bdev1", 00:39:53.213 "raid_level": "raid1", 00:39:53.213 "base_bdevs": [ 00:39:53.213 "malloc1", 00:39:53.213 "malloc2", 00:39:53.213 "malloc3", 00:39:53.213 "malloc4" 00:39:53.213 ], 00:39:53.213 "superblock": false, 00:39:53.213 "method": "bdev_raid_create", 00:39:53.213 "req_id": 1 00:39:53.213 } 00:39:53.213 Got JSON-RPC error response 00:39:53.213 response: 00:39:53.213 { 00:39:53.213 "code": -17, 00:39:53.213 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:39:53.213 } 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:53.213 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.471 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:39:53.471 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:39:53.471 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:39:53.471 05:30:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.471 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:53.471 [2024-12-09 05:30:40.224883] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:39:53.471 [2024-12-09 05:30:40.225120] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:53.471 [2024-12-09 05:30:40.225193] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:39:53.471 [2024-12-09 05:30:40.225312] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:53.471 [2024-12-09 05:30:40.228603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:53.472 [2024-12-09 05:30:40.228824] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:39:53.472 [2024-12-09 05:30:40.229068] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:39:53.472 [2024-12-09 05:30:40.229284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:39:53.472 pt1 00:39:53.472 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.472 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:39:53.472 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:53.472 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:53.472 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:53.472 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:53.472 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:53.472 05:30:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:53.472 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:53.472 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:53.472 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:53.472 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:53.472 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.472 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:53.472 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:53.472 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.472 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:53.472 "name": "raid_bdev1", 00:39:53.472 "uuid": "cde31e50-895b-4887-8d2f-73113ebc3689", 00:39:53.472 "strip_size_kb": 0, 00:39:53.472 "state": "configuring", 00:39:53.472 "raid_level": "raid1", 00:39:53.472 "superblock": true, 00:39:53.472 "num_base_bdevs": 4, 00:39:53.472 "num_base_bdevs_discovered": 1, 00:39:53.472 "num_base_bdevs_operational": 4, 00:39:53.472 "base_bdevs_list": [ 00:39:53.472 { 00:39:53.472 "name": "pt1", 00:39:53.472 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:53.472 "is_configured": true, 00:39:53.472 "data_offset": 2048, 00:39:53.472 "data_size": 63488 00:39:53.472 }, 00:39:53.472 { 00:39:53.472 "name": null, 00:39:53.472 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:53.472 "is_configured": false, 00:39:53.472 "data_offset": 2048, 00:39:53.472 "data_size": 63488 00:39:53.472 }, 00:39:53.472 { 00:39:53.472 "name": null, 00:39:53.472 "uuid": "00000000-0000-0000-0000-000000000003", 00:39:53.472 
"is_configured": false, 00:39:53.472 "data_offset": 2048, 00:39:53.472 "data_size": 63488 00:39:53.472 }, 00:39:53.472 { 00:39:53.472 "name": null, 00:39:53.472 "uuid": "00000000-0000-0000-0000-000000000004", 00:39:53.472 "is_configured": false, 00:39:53.472 "data_offset": 2048, 00:39:53.472 "data_size": 63488 00:39:53.472 } 00:39:53.472 ] 00:39:53.472 }' 00:39:53.472 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:53.472 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:54.038 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:39:54.038 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:54.038 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:54.038 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:54.038 [2024-12-09 05:30:40.769538] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:54.038 [2024-12-09 05:30:40.769990] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:54.039 [2024-12-09 05:30:40.770035] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:39:54.039 [2024-12-09 05:30:40.770055] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:54.039 [2024-12-09 05:30:40.770723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:54.039 [2024-12-09 05:30:40.770758] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:54.039 [2024-12-09 05:30:40.770956] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:39:54.039 [2024-12-09 05:30:40.771005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:39:54.039 pt2 00:39:54.039 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:54.039 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:39:54.039 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:54.039 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:54.039 [2024-12-09 05:30:40.777589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:39:54.039 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:54.039 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:39:54.039 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:54.039 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:54.039 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:54.039 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:54.039 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:54.039 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:54.039 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:54.039 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:54.039 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:54.039 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:54.039 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:54.039 05:30:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:54.039 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:54.039 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:54.039 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:54.039 "name": "raid_bdev1", 00:39:54.039 "uuid": "cde31e50-895b-4887-8d2f-73113ebc3689", 00:39:54.039 "strip_size_kb": 0, 00:39:54.039 "state": "configuring", 00:39:54.039 "raid_level": "raid1", 00:39:54.039 "superblock": true, 00:39:54.039 "num_base_bdevs": 4, 00:39:54.039 "num_base_bdevs_discovered": 1, 00:39:54.039 "num_base_bdevs_operational": 4, 00:39:54.039 "base_bdevs_list": [ 00:39:54.039 { 00:39:54.039 "name": "pt1", 00:39:54.039 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:54.039 "is_configured": true, 00:39:54.039 "data_offset": 2048, 00:39:54.039 "data_size": 63488 00:39:54.039 }, 00:39:54.039 { 00:39:54.039 "name": null, 00:39:54.039 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:54.039 "is_configured": false, 00:39:54.039 "data_offset": 0, 00:39:54.039 "data_size": 63488 00:39:54.039 }, 00:39:54.039 { 00:39:54.039 "name": null, 00:39:54.039 "uuid": "00000000-0000-0000-0000-000000000003", 00:39:54.039 "is_configured": false, 00:39:54.039 "data_offset": 2048, 00:39:54.039 "data_size": 63488 00:39:54.039 }, 00:39:54.039 { 00:39:54.039 "name": null, 00:39:54.039 "uuid": "00000000-0000-0000-0000-000000000004", 00:39:54.039 "is_configured": false, 00:39:54.039 "data_offset": 2048, 00:39:54.039 "data_size": 63488 00:39:54.039 } 00:39:54.039 ] 00:39:54.039 }' 00:39:54.039 05:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:54.039 05:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:54.607 [2024-12-09 05:30:41.317881] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:54.607 [2024-12-09 05:30:41.318142] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:54.607 [2024-12-09 05:30:41.318220] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:39:54.607 [2024-12-09 05:30:41.318243] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:54.607 [2024-12-09 05:30:41.319035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:54.607 [2024-12-09 05:30:41.319106] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:54.607 [2024-12-09 05:30:41.319234] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:39:54.607 [2024-12-09 05:30:41.319267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:54.607 pt2 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:39:54.607 05:30:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:54.607 [2024-12-09 05:30:41.325778] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:39:54.607 [2024-12-09 05:30:41.325874] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:54.607 [2024-12-09 05:30:41.325899] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:39:54.607 [2024-12-09 05:30:41.325912] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:54.607 [2024-12-09 05:30:41.326379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:54.607 [2024-12-09 05:30:41.326411] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:39:54.607 [2024-12-09 05:30:41.326493] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:39:54.607 [2024-12-09 05:30:41.326522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:39:54.607 pt3 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:54.607 [2024-12-09 05:30:41.333792] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:39:54.607 [2024-12-09 
05:30:41.333992] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:54.607 [2024-12-09 05:30:41.334150] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:39:54.607 [2024-12-09 05:30:41.334275] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:54.607 [2024-12-09 05:30:41.334887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:54.607 [2024-12-09 05:30:41.335080] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:39:54.607 [2024-12-09 05:30:41.335295] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:39:54.607 [2024-12-09 05:30:41.335471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:39:54.607 [2024-12-09 05:30:41.335839] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:39:54.607 [2024-12-09 05:30:41.335984] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:39:54.607 [2024-12-09 05:30:41.336560] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:39:54.607 [2024-12-09 05:30:41.336812] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:39:54.607 [2024-12-09 05:30:41.336835] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:39:54.607 [2024-12-09 05:30:41.337095] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:54.607 pt4 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:54.607 "name": "raid_bdev1", 00:39:54.607 "uuid": "cde31e50-895b-4887-8d2f-73113ebc3689", 00:39:54.607 "strip_size_kb": 0, 00:39:54.607 "state": "online", 00:39:54.607 "raid_level": "raid1", 00:39:54.607 "superblock": true, 00:39:54.607 "num_base_bdevs": 4, 00:39:54.607 
"num_base_bdevs_discovered": 4, 00:39:54.607 "num_base_bdevs_operational": 4, 00:39:54.607 "base_bdevs_list": [ 00:39:54.607 { 00:39:54.607 "name": "pt1", 00:39:54.607 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:54.607 "is_configured": true, 00:39:54.607 "data_offset": 2048, 00:39:54.607 "data_size": 63488 00:39:54.607 }, 00:39:54.607 { 00:39:54.607 "name": "pt2", 00:39:54.607 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:54.607 "is_configured": true, 00:39:54.607 "data_offset": 2048, 00:39:54.607 "data_size": 63488 00:39:54.607 }, 00:39:54.607 { 00:39:54.607 "name": "pt3", 00:39:54.607 "uuid": "00000000-0000-0000-0000-000000000003", 00:39:54.607 "is_configured": true, 00:39:54.607 "data_offset": 2048, 00:39:54.607 "data_size": 63488 00:39:54.607 }, 00:39:54.607 { 00:39:54.607 "name": "pt4", 00:39:54.607 "uuid": "00000000-0000-0000-0000-000000000004", 00:39:54.607 "is_configured": true, 00:39:54.607 "data_offset": 2048, 00:39:54.607 "data_size": 63488 00:39:54.607 } 00:39:54.607 ] 00:39:54.607 }' 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:54.607 05:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:55.175 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:39:55.175 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:39:55.175 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:39:55.175 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:39:55.175 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:39:55.175 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:39:55.175 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:39:55.176 05:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.176 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:39:55.176 05:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:55.176 [2024-12-09 05:30:41.918492] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:55.176 05:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.176 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:55.176 "name": "raid_bdev1", 00:39:55.176 "aliases": [ 00:39:55.176 "cde31e50-895b-4887-8d2f-73113ebc3689" 00:39:55.176 ], 00:39:55.176 "product_name": "Raid Volume", 00:39:55.176 "block_size": 512, 00:39:55.176 "num_blocks": 63488, 00:39:55.176 "uuid": "cde31e50-895b-4887-8d2f-73113ebc3689", 00:39:55.176 "assigned_rate_limits": { 00:39:55.176 "rw_ios_per_sec": 0, 00:39:55.176 "rw_mbytes_per_sec": 0, 00:39:55.176 "r_mbytes_per_sec": 0, 00:39:55.176 "w_mbytes_per_sec": 0 00:39:55.176 }, 00:39:55.176 "claimed": false, 00:39:55.176 "zoned": false, 00:39:55.176 "supported_io_types": { 00:39:55.176 "read": true, 00:39:55.176 "write": true, 00:39:55.176 "unmap": false, 00:39:55.176 "flush": false, 00:39:55.176 "reset": true, 00:39:55.176 "nvme_admin": false, 00:39:55.176 "nvme_io": false, 00:39:55.176 "nvme_io_md": false, 00:39:55.176 "write_zeroes": true, 00:39:55.176 "zcopy": false, 00:39:55.176 "get_zone_info": false, 00:39:55.176 "zone_management": false, 00:39:55.176 "zone_append": false, 00:39:55.176 "compare": false, 00:39:55.176 "compare_and_write": false, 00:39:55.176 "abort": false, 00:39:55.176 "seek_hole": false, 00:39:55.176 "seek_data": false, 00:39:55.176 "copy": false, 00:39:55.176 "nvme_iov_md": false 00:39:55.176 }, 00:39:55.176 "memory_domains": [ 00:39:55.176 { 00:39:55.176 "dma_device_id": "system", 00:39:55.176 
"dma_device_type": 1 00:39:55.176 }, 00:39:55.176 { 00:39:55.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:55.176 "dma_device_type": 2 00:39:55.176 }, 00:39:55.176 { 00:39:55.176 "dma_device_id": "system", 00:39:55.176 "dma_device_type": 1 00:39:55.176 }, 00:39:55.176 { 00:39:55.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:55.176 "dma_device_type": 2 00:39:55.176 }, 00:39:55.176 { 00:39:55.176 "dma_device_id": "system", 00:39:55.176 "dma_device_type": 1 00:39:55.176 }, 00:39:55.176 { 00:39:55.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:55.176 "dma_device_type": 2 00:39:55.176 }, 00:39:55.176 { 00:39:55.176 "dma_device_id": "system", 00:39:55.176 "dma_device_type": 1 00:39:55.176 }, 00:39:55.176 { 00:39:55.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:55.176 "dma_device_type": 2 00:39:55.176 } 00:39:55.176 ], 00:39:55.176 "driver_specific": { 00:39:55.176 "raid": { 00:39:55.176 "uuid": "cde31e50-895b-4887-8d2f-73113ebc3689", 00:39:55.176 "strip_size_kb": 0, 00:39:55.176 "state": "online", 00:39:55.176 "raid_level": "raid1", 00:39:55.176 "superblock": true, 00:39:55.176 "num_base_bdevs": 4, 00:39:55.176 "num_base_bdevs_discovered": 4, 00:39:55.176 "num_base_bdevs_operational": 4, 00:39:55.176 "base_bdevs_list": [ 00:39:55.176 { 00:39:55.176 "name": "pt1", 00:39:55.176 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:55.176 "is_configured": true, 00:39:55.176 "data_offset": 2048, 00:39:55.176 "data_size": 63488 00:39:55.176 }, 00:39:55.176 { 00:39:55.176 "name": "pt2", 00:39:55.176 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:55.176 "is_configured": true, 00:39:55.176 "data_offset": 2048, 00:39:55.176 "data_size": 63488 00:39:55.176 }, 00:39:55.176 { 00:39:55.176 "name": "pt3", 00:39:55.176 "uuid": "00000000-0000-0000-0000-000000000003", 00:39:55.176 "is_configured": true, 00:39:55.176 "data_offset": 2048, 00:39:55.176 "data_size": 63488 00:39:55.176 }, 00:39:55.176 { 00:39:55.176 "name": "pt4", 00:39:55.176 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:39:55.176 "is_configured": true, 00:39:55.176 "data_offset": 2048, 00:39:55.176 "data_size": 63488 00:39:55.176 } 00:39:55.176 ] 00:39:55.176 } 00:39:55.176 } 00:39:55.176 }' 00:39:55.176 05:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:39:55.176 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:39:55.176 pt2 00:39:55.176 pt3 00:39:55.176 pt4' 00:39:55.176 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:55.176 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:39:55.176 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:55.176 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:39:55.176 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.176 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:55.176 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:55.176 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.176 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:55.176 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:55.176 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:55.176 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:39:55.176 05:30:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.176 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:55.176 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:55.176 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:55.435 [2024-12-09 05:30:42.282454] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' cde31e50-895b-4887-8d2f-73113ebc3689 '!=' cde31e50-895b-4887-8d2f-73113ebc3689 ']' 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:55.435 [2024-12-09 05:30:42.330196] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:39:55.435 05:30:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:55.435 "name": "raid_bdev1", 00:39:55.435 "uuid": "cde31e50-895b-4887-8d2f-73113ebc3689", 00:39:55.435 "strip_size_kb": 0, 00:39:55.435 "state": "online", 
00:39:55.435 "raid_level": "raid1", 00:39:55.435 "superblock": true, 00:39:55.435 "num_base_bdevs": 4, 00:39:55.435 "num_base_bdevs_discovered": 3, 00:39:55.435 "num_base_bdevs_operational": 3, 00:39:55.435 "base_bdevs_list": [ 00:39:55.435 { 00:39:55.435 "name": null, 00:39:55.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:55.435 "is_configured": false, 00:39:55.435 "data_offset": 0, 00:39:55.435 "data_size": 63488 00:39:55.435 }, 00:39:55.435 { 00:39:55.435 "name": "pt2", 00:39:55.435 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:55.435 "is_configured": true, 00:39:55.435 "data_offset": 2048, 00:39:55.435 "data_size": 63488 00:39:55.435 }, 00:39:55.435 { 00:39:55.435 "name": "pt3", 00:39:55.435 "uuid": "00000000-0000-0000-0000-000000000003", 00:39:55.435 "is_configured": true, 00:39:55.435 "data_offset": 2048, 00:39:55.435 "data_size": 63488 00:39:55.435 }, 00:39:55.435 { 00:39:55.435 "name": "pt4", 00:39:55.435 "uuid": "00000000-0000-0000-0000-000000000004", 00:39:55.435 "is_configured": true, 00:39:55.435 "data_offset": 2048, 00:39:55.435 "data_size": 63488 00:39:55.435 } 00:39:55.435 ] 00:39:55.435 }' 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:55.435 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:56.010 [2024-12-09 05:30:42.858398] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:56.010 [2024-12-09 05:30:42.858441] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:56.010 [2024-12-09 05:30:42.858559] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:39:56.010 [2024-12-09 05:30:42.858698] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:56.010 [2024-12-09 05:30:42.858715] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:39:56.010 
05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:56.010 [2024-12-09 05:30:42.942404] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:56.010 [2024-12-09 05:30:42.942610] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:56.010 [2024-12-09 05:30:42.942822] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:39:56.010 [2024-12-09 05:30:42.942978] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:56.010 [2024-12-09 05:30:42.946289] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:56.010 pt2 00:39:56.010 [2024-12-09 05:30:42.946484] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:56.010 [2024-12-09 05:30:42.946617] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:39:56.010 [2024-12-09 05:30:42.946699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:56.010 05:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.286 05:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:56.286 "name": "raid_bdev1", 00:39:56.286 "uuid": "cde31e50-895b-4887-8d2f-73113ebc3689", 00:39:56.286 "strip_size_kb": 0, 00:39:56.286 "state": "configuring", 00:39:56.286 "raid_level": "raid1", 00:39:56.286 "superblock": true, 00:39:56.286 "num_base_bdevs": 4, 00:39:56.286 "num_base_bdevs_discovered": 1, 00:39:56.286 "num_base_bdevs_operational": 3, 00:39:56.286 "base_bdevs_list": [ 00:39:56.286 { 00:39:56.286 "name": null, 00:39:56.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:56.286 "is_configured": false, 00:39:56.286 "data_offset": 2048, 00:39:56.286 "data_size": 63488 00:39:56.286 }, 00:39:56.286 { 00:39:56.286 "name": "pt2", 00:39:56.286 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:56.286 "is_configured": true, 00:39:56.286 "data_offset": 2048, 00:39:56.286 "data_size": 63488 00:39:56.286 }, 00:39:56.286 { 00:39:56.286 "name": null, 00:39:56.286 "uuid": "00000000-0000-0000-0000-000000000003", 00:39:56.286 "is_configured": false, 00:39:56.286 "data_offset": 2048, 00:39:56.286 "data_size": 63488 00:39:56.286 }, 00:39:56.286 { 00:39:56.286 "name": null, 00:39:56.286 "uuid": "00000000-0000-0000-0000-000000000004", 00:39:56.286 "is_configured": false, 00:39:56.286 "data_offset": 2048, 00:39:56.286 "data_size": 63488 00:39:56.286 } 00:39:56.286 ] 00:39:56.286 }' 
00:39:56.286 05:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:56.286 05:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:56.543 05:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:39:56.543 05:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:39:56.543 05:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:39:56.543 05:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.543 05:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:56.543 [2024-12-09 05:30:43.475091] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:39:56.543 [2024-12-09 05:30:43.475411] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:56.543 [2024-12-09 05:30:43.475491] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:39:56.543 [2024-12-09 05:30:43.475744] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:56.543 [2024-12-09 05:30:43.476531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:56.544 [2024-12-09 05:30:43.476569] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:39:56.544 [2024-12-09 05:30:43.476739] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:39:56.544 [2024-12-09 05:30:43.476794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:39:56.544 pt3 00:39:56.544 05:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.544 05:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:39:56.544 05:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:56.544 05:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:56.544 05:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:56.544 05:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:56.544 05:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:56.544 05:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:56.544 05:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:56.544 05:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:56.544 05:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:56.544 05:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:56.544 05:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:56.544 05:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.544 05:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:56.544 05:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.801 05:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:56.801 "name": "raid_bdev1", 00:39:56.801 "uuid": "cde31e50-895b-4887-8d2f-73113ebc3689", 00:39:56.801 "strip_size_kb": 0, 00:39:56.801 "state": "configuring", 00:39:56.801 "raid_level": "raid1", 00:39:56.801 "superblock": true, 00:39:56.801 "num_base_bdevs": 4, 00:39:56.801 "num_base_bdevs_discovered": 2, 00:39:56.801 "num_base_bdevs_operational": 3, 00:39:56.801 
"base_bdevs_list": [ 00:39:56.801 { 00:39:56.801 "name": null, 00:39:56.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:56.801 "is_configured": false, 00:39:56.801 "data_offset": 2048, 00:39:56.801 "data_size": 63488 00:39:56.801 }, 00:39:56.801 { 00:39:56.801 "name": "pt2", 00:39:56.801 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:56.801 "is_configured": true, 00:39:56.801 "data_offset": 2048, 00:39:56.801 "data_size": 63488 00:39:56.801 }, 00:39:56.801 { 00:39:56.801 "name": "pt3", 00:39:56.801 "uuid": "00000000-0000-0000-0000-000000000003", 00:39:56.801 "is_configured": true, 00:39:56.801 "data_offset": 2048, 00:39:56.801 "data_size": 63488 00:39:56.801 }, 00:39:56.801 { 00:39:56.801 "name": null, 00:39:56.801 "uuid": "00000000-0000-0000-0000-000000000004", 00:39:56.801 "is_configured": false, 00:39:56.801 "data_offset": 2048, 00:39:56.801 "data_size": 63488 00:39:56.801 } 00:39:56.801 ] 00:39:56.801 }' 00:39:56.801 05:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:56.801 05:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:57.059 05:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:39:57.059 05:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:39:57.059 05:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:39:57.059 05:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:39:57.059 05:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.059 05:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:57.059 [2024-12-09 05:30:43.995321] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:39:57.059 [2024-12-09 05:30:43.995663] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:57.059 [2024-12-09 05:30:43.995713] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:39:57.059 [2024-12-09 05:30:43.995729] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:57.059 [2024-12-09 05:30:43.996459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:57.059 [2024-12-09 05:30:43.996483] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:39:57.059 [2024-12-09 05:30:43.996665] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:39:57.059 [2024-12-09 05:30:43.996702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:39:57.059 [2024-12-09 05:30:43.996883] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:39:57.059 [2024-12-09 05:30:43.996919] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:39:57.059 [2024-12-09 05:30:43.997289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:39:57.059 [2024-12-09 05:30:43.997524] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:39:57.059 [2024-12-09 05:30:43.997567] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:39:57.059 [2024-12-09 05:30:43.997757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:57.059 pt4 00:39:57.059 05:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.059 05:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:39:57.059 05:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:57.059 05:30:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:57.059 05:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:57.059 05:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:57.059 05:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:57.059 05:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:57.059 05:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:57.059 05:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:57.059 05:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:57.059 05:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:57.059 05:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:57.059 05:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.059 05:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:57.059 05:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.317 05:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:57.317 "name": "raid_bdev1", 00:39:57.317 "uuid": "cde31e50-895b-4887-8d2f-73113ebc3689", 00:39:57.317 "strip_size_kb": 0, 00:39:57.317 "state": "online", 00:39:57.317 "raid_level": "raid1", 00:39:57.317 "superblock": true, 00:39:57.317 "num_base_bdevs": 4, 00:39:57.317 "num_base_bdevs_discovered": 3, 00:39:57.317 "num_base_bdevs_operational": 3, 00:39:57.317 "base_bdevs_list": [ 00:39:57.317 { 00:39:57.317 "name": null, 00:39:57.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:57.317 "is_configured": false, 00:39:57.317 
"data_offset": 2048, 00:39:57.317 "data_size": 63488 00:39:57.317 }, 00:39:57.317 { 00:39:57.317 "name": "pt2", 00:39:57.317 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:57.317 "is_configured": true, 00:39:57.317 "data_offset": 2048, 00:39:57.317 "data_size": 63488 00:39:57.317 }, 00:39:57.317 { 00:39:57.317 "name": "pt3", 00:39:57.317 "uuid": "00000000-0000-0000-0000-000000000003", 00:39:57.317 "is_configured": true, 00:39:57.317 "data_offset": 2048, 00:39:57.317 "data_size": 63488 00:39:57.317 }, 00:39:57.317 { 00:39:57.317 "name": "pt4", 00:39:57.317 "uuid": "00000000-0000-0000-0000-000000000004", 00:39:57.317 "is_configured": true, 00:39:57.317 "data_offset": 2048, 00:39:57.317 "data_size": 63488 00:39:57.317 } 00:39:57.317 ] 00:39:57.317 }' 00:39:57.317 05:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:57.317 05:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:57.575 05:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:39:57.575 05:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.575 05:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:57.575 [2024-12-09 05:30:44.535668] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:57.575 [2024-12-09 05:30:44.535708] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:57.575 [2024-12-09 05:30:44.535846] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:57.575 [2024-12-09 05:30:44.535952] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:57.575 [2024-12-09 05:30:44.535974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:39:57.575 05:30:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.575 05:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:57.575 05:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:39:57.575 05:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.575 05:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:57.834 05:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.834 05:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:39:57.834 05:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:39:57.834 05:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:39:57.834 05:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:39:57.834 05:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:39:57.834 05:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.834 05:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:57.834 05:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.834 05:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:39:57.834 05:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.834 05:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:57.834 [2024-12-09 05:30:44.607685] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:39:57.834 [2024-12-09 05:30:44.607756] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:39:57.834 [2024-12-09 05:30:44.607800] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:39:57.834 [2024-12-09 05:30:44.607820] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:57.834 [2024-12-09 05:30:44.610947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:57.834 [2024-12-09 05:30:44.611030] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:39:57.834 [2024-12-09 05:30:44.611138] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:39:57.834 [2024-12-09 05:30:44.611206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:39:57.834 [2024-12-09 05:30:44.611425] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:39:57.834 [2024-12-09 05:30:44.611449] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:57.834 [2024-12-09 05:30:44.611468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:39:57.834 [2024-12-09 05:30:44.611540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:57.834 [2024-12-09 05:30:44.611691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:39:57.834 pt1 00:39:57.834 05:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.834 05:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:39:57.834 05:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:39:57.834 05:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:57.834 05:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:39:57.834 05:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:57.834 05:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:57.834 05:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:57.834 05:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:57.834 05:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:57.834 05:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:57.834 05:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:57.834 05:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:57.834 05:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.834 05:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:57.834 05:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:57.834 05:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.834 05:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:57.834 "name": "raid_bdev1", 00:39:57.834 "uuid": "cde31e50-895b-4887-8d2f-73113ebc3689", 00:39:57.834 "strip_size_kb": 0, 00:39:57.834 "state": "configuring", 00:39:57.834 "raid_level": "raid1", 00:39:57.834 "superblock": true, 00:39:57.834 "num_base_bdevs": 4, 00:39:57.834 "num_base_bdevs_discovered": 2, 00:39:57.834 "num_base_bdevs_operational": 3, 00:39:57.834 "base_bdevs_list": [ 00:39:57.834 { 00:39:57.834 "name": null, 00:39:57.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:57.834 "is_configured": false, 00:39:57.834 "data_offset": 2048, 00:39:57.834 
"data_size": 63488 00:39:57.834 }, 00:39:57.834 { 00:39:57.834 "name": "pt2", 00:39:57.834 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:57.834 "is_configured": true, 00:39:57.834 "data_offset": 2048, 00:39:57.834 "data_size": 63488 00:39:57.834 }, 00:39:57.834 { 00:39:57.834 "name": "pt3", 00:39:57.835 "uuid": "00000000-0000-0000-0000-000000000003", 00:39:57.835 "is_configured": true, 00:39:57.835 "data_offset": 2048, 00:39:57.835 "data_size": 63488 00:39:57.835 }, 00:39:57.835 { 00:39:57.835 "name": null, 00:39:57.835 "uuid": "00000000-0000-0000-0000-000000000004", 00:39:57.835 "is_configured": false, 00:39:57.835 "data_offset": 2048, 00:39:57.835 "data_size": 63488 00:39:57.835 } 00:39:57.835 ] 00:39:57.835 }' 00:39:57.835 05:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:57.835 05:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:58.399 05:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:39:58.399 05:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:39:58.399 05:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:58.399 05:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:58.399 05:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:58.399 05:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:39:58.399 05:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:39:58.399 05:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:58.399 05:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:58.399 [2024-12-09 
05:30:45.184072] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:39:58.399 [2024-12-09 05:30:45.184452] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:58.399 [2024-12-09 05:30:45.184499] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:39:58.399 [2024-12-09 05:30:45.184516] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:58.399 [2024-12-09 05:30:45.185353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:58.399 pt4 00:39:58.399 [2024-12-09 05:30:45.185565] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:39:58.399 [2024-12-09 05:30:45.185701] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:39:58.399 [2024-12-09 05:30:45.185744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:39:58.399 [2024-12-09 05:30:45.185986] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:39:58.399 [2024-12-09 05:30:45.186003] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:39:58.399 [2024-12-09 05:30:45.186410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:39:58.399 [2024-12-09 05:30:45.186685] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:39:58.399 [2024-12-09 05:30:45.186705] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:39:58.399 [2024-12-09 05:30:45.186927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:58.399 05:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:58.399 05:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:39:58.399 05:30:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:58.399 05:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:58.399 05:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:58.399 05:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:58.399 05:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:58.399 05:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:58.399 05:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:58.399 05:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:58.399 05:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:58.399 05:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:58.399 05:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:58.399 05:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:58.399 05:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:58.399 05:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:58.399 05:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:58.399 "name": "raid_bdev1", 00:39:58.399 "uuid": "cde31e50-895b-4887-8d2f-73113ebc3689", 00:39:58.399 "strip_size_kb": 0, 00:39:58.399 "state": "online", 00:39:58.399 "raid_level": "raid1", 00:39:58.399 "superblock": true, 00:39:58.399 "num_base_bdevs": 4, 00:39:58.399 "num_base_bdevs_discovered": 3, 00:39:58.399 "num_base_bdevs_operational": 3, 00:39:58.399 "base_bdevs_list": [ 00:39:58.399 { 
00:39:58.399 "name": null, 00:39:58.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:58.399 "is_configured": false, 00:39:58.399 "data_offset": 2048, 00:39:58.399 "data_size": 63488 00:39:58.399 }, 00:39:58.399 { 00:39:58.399 "name": "pt2", 00:39:58.399 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:58.399 "is_configured": true, 00:39:58.399 "data_offset": 2048, 00:39:58.399 "data_size": 63488 00:39:58.399 }, 00:39:58.399 { 00:39:58.399 "name": "pt3", 00:39:58.399 "uuid": "00000000-0000-0000-0000-000000000003", 00:39:58.399 "is_configured": true, 00:39:58.399 "data_offset": 2048, 00:39:58.399 "data_size": 63488 00:39:58.399 }, 00:39:58.399 { 00:39:58.399 "name": "pt4", 00:39:58.399 "uuid": "00000000-0000-0000-0000-000000000004", 00:39:58.399 "is_configured": true, 00:39:58.399 "data_offset": 2048, 00:39:58.399 "data_size": 63488 00:39:58.399 } 00:39:58.399 ] 00:39:58.399 }' 00:39:58.399 05:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:58.399 05:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:58.962 05:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:39:58.962 05:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:58.962 05:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:58.962 05:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:39:58.962 05:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:58.962 05:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:39:58.962 05:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:39:58.962 05:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:58.962 
05:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:58.962 05:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:39:58.962 [2024-12-09 05:30:45.776535] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:58.962 05:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:58.962 05:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' cde31e50-895b-4887-8d2f-73113ebc3689 '!=' cde31e50-895b-4887-8d2f-73113ebc3689 ']' 00:39:58.962 05:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74773 00:39:58.962 05:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74773 ']' 00:39:58.962 05:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74773 00:39:58.962 05:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:39:58.962 05:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:58.962 05:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74773 00:39:58.962 killing process with pid 74773 00:39:58.962 05:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:58.962 05:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:58.962 05:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74773' 00:39:58.962 05:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74773 00:39:58.962 [2024-12-09 05:30:45.856108] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:58.962 05:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74773 00:39:58.962 [2024-12-09 05:30:45.856256] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:58.962 [2024-12-09 05:30:45.856400] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:58.962 [2024-12-09 05:30:45.856435] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:39:59.527 [2024-12-09 05:30:46.196760] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:40:00.904 05:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:40:00.904 00:40:00.904 real 0m9.748s 00:40:00.904 user 0m15.849s 00:40:00.904 sys 0m1.461s 00:40:00.904 05:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:00.904 ************************************ 00:40:00.904 END TEST raid_superblock_test 00:40:00.904 ************************************ 00:40:00.904 05:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:00.904 05:30:47 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:40:00.904 05:30:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:40:00.904 05:30:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:00.904 05:30:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:40:00.904 ************************************ 00:40:00.904 START TEST raid_read_error_test 00:40:00.904 ************************************ 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:40:00.904 05:30:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.IEkaBPhqwg 00:40:00.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75276 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75276 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75276 ']' 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:00.904 05:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:00.904 [2024-12-09 05:30:47.641066] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:40:00.904 [2024-12-09 05:30:47.641507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75276 ] 00:40:00.905 [2024-12-09 05:30:47.843441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:01.163 [2024-12-09 05:30:48.029703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:01.422 [2024-12-09 05:30:48.262051] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:01.422 [2024-12-09 05:30:48.262347] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:01.988 BaseBdev1_malloc 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:01.988 true 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:01.988 [2024-12-09 05:30:48.743253] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:40:01.988 [2024-12-09 05:30:48.743531] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:01.988 [2024-12-09 05:30:48.743584] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:40:01.988 [2024-12-09 05:30:48.743604] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:01.988 [2024-12-09 05:30:48.746887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:01.988 [2024-12-09 05:30:48.746936] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:40:01.988 BaseBdev1 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:01.988 BaseBdev2_malloc 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:01.988 true 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:01.988 [2024-12-09 05:30:48.809494] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:40:01.988 [2024-12-09 05:30:48.809733] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:01.988 [2024-12-09 05:30:48.809819] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:40:01.988 [2024-12-09 05:30:48.810024] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:01.988 [2024-12-09 05:30:48.813365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:01.988 [2024-12-09 05:30:48.813548] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:40:01.988 BaseBdev2 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:01.988 BaseBdev3_malloc 00:40:01.988 05:30:48 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:01.988 true 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.988 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:01.988 [2024-12-09 05:30:48.887988] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:40:01.988 [2024-12-09 05:30:48.888194] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:01.988 [2024-12-09 05:30:48.888232] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:40:01.988 [2024-12-09 05:30:48.888252] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:01.988 [2024-12-09 05:30:48.891257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:01.988 BaseBdev3 00:40:01.988 [2024-12-09 05:30:48.891494] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:40:01.989 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:01.989 05:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:40:01.989 05:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:40:01.989 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.989 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:01.989 BaseBdev4_malloc 00:40:01.989 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:01.989 05:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:40:01.989 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.989 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:01.989 true 00:40:01.989 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:01.989 05:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:40:01.989 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.989 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:01.989 [2024-12-09 05:30:48.954384] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:40:01.989 [2024-12-09 05:30:48.954449] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:01.989 [2024-12-09 05:30:48.954478] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:40:01.989 [2024-12-09 05:30:48.954495] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:01.989 [2024-12-09 05:30:48.957579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:01.989 [2024-12-09 05:30:48.957634] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:40:02.247 BaseBdev4 00:40:02.247 05:30:48 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:02.247 05:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:40:02.247 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:02.247 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:02.247 [2024-12-09 05:30:48.962534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:02.247 [2024-12-09 05:30:48.965321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:02.247 [2024-12-09 05:30:48.965549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:40:02.247 [2024-12-09 05:30:48.965788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:40:02.247 [2024-12-09 05:30:48.966236] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:40:02.247 [2024-12-09 05:30:48.966264] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:40:02.247 [2024-12-09 05:30:48.966596] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:40:02.247 [2024-12-09 05:30:48.966890] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:40:02.247 [2024-12-09 05:30:48.966923] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:40:02.247 [2024-12-09 05:30:48.967165] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:02.247 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:02.247 05:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:40:02.247 05:30:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:02.247 05:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:02.247 05:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:02.247 05:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:02.247 05:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:40:02.247 05:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:02.247 05:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:02.247 05:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:02.247 05:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:02.247 05:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:02.247 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:02.247 05:30:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:02.247 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:02.247 05:30:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:02.247 05:30:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:02.247 "name": "raid_bdev1", 00:40:02.247 "uuid": "434c1955-182e-4f2d-a4b5-d17e19af3af4", 00:40:02.247 "strip_size_kb": 0, 00:40:02.247 "state": "online", 00:40:02.247 "raid_level": "raid1", 00:40:02.247 "superblock": true, 00:40:02.247 "num_base_bdevs": 4, 00:40:02.247 "num_base_bdevs_discovered": 4, 00:40:02.247 "num_base_bdevs_operational": 4, 00:40:02.247 "base_bdevs_list": [ 00:40:02.247 { 
00:40:02.247 "name": "BaseBdev1", 00:40:02.247 "uuid": "82cda1e8-4ba4-57a8-a76b-fa35faadabd6", 00:40:02.247 "is_configured": true, 00:40:02.247 "data_offset": 2048, 00:40:02.247 "data_size": 63488 00:40:02.247 }, 00:40:02.247 { 00:40:02.247 "name": "BaseBdev2", 00:40:02.247 "uuid": "9d931784-7757-5f89-b9f4-54b66e32d7aa", 00:40:02.247 "is_configured": true, 00:40:02.247 "data_offset": 2048, 00:40:02.247 "data_size": 63488 00:40:02.247 }, 00:40:02.247 { 00:40:02.247 "name": "BaseBdev3", 00:40:02.247 "uuid": "b95473d4-ec19-5fce-b691-fa59e8b238f5", 00:40:02.247 "is_configured": true, 00:40:02.247 "data_offset": 2048, 00:40:02.247 "data_size": 63488 00:40:02.247 }, 00:40:02.247 { 00:40:02.247 "name": "BaseBdev4", 00:40:02.247 "uuid": "5b4660c1-50aa-5513-b78c-fef1f06184aa", 00:40:02.247 "is_configured": true, 00:40:02.247 "data_offset": 2048, 00:40:02.247 "data_size": 63488 00:40:02.247 } 00:40:02.247 ] 00:40:02.247 }' 00:40:02.247 05:30:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:02.247 05:30:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:02.815 05:30:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:40:02.815 05:30:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:40:02.815 [2024-12-09 05:30:49.632865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:40:03.791 05:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:40:03.791 05:30:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:03.791 05:30:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:03.791 05:30:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:03.791 05:30:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:40:03.791 05:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:40:03.791 05:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:40:03.791 05:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:40:03.791 05:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:40:03.791 05:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:03.791 05:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:03.791 05:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:03.791 05:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:03.791 05:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:40:03.791 05:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:03.791 05:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:03.791 05:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:03.791 05:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:03.791 05:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:03.791 05:30:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:03.791 05:30:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:03.791 05:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:03.791 05:30:50 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:03.791 05:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:03.791 "name": "raid_bdev1", 00:40:03.791 "uuid": "434c1955-182e-4f2d-a4b5-d17e19af3af4", 00:40:03.791 "strip_size_kb": 0, 00:40:03.791 "state": "online", 00:40:03.791 "raid_level": "raid1", 00:40:03.791 "superblock": true, 00:40:03.791 "num_base_bdevs": 4, 00:40:03.791 "num_base_bdevs_discovered": 4, 00:40:03.791 "num_base_bdevs_operational": 4, 00:40:03.791 "base_bdevs_list": [ 00:40:03.791 { 00:40:03.791 "name": "BaseBdev1", 00:40:03.791 "uuid": "82cda1e8-4ba4-57a8-a76b-fa35faadabd6", 00:40:03.791 "is_configured": true, 00:40:03.791 "data_offset": 2048, 00:40:03.791 "data_size": 63488 00:40:03.791 }, 00:40:03.791 { 00:40:03.791 "name": "BaseBdev2", 00:40:03.791 "uuid": "9d931784-7757-5f89-b9f4-54b66e32d7aa", 00:40:03.791 "is_configured": true, 00:40:03.791 "data_offset": 2048, 00:40:03.791 "data_size": 63488 00:40:03.791 }, 00:40:03.791 { 00:40:03.791 "name": "BaseBdev3", 00:40:03.791 "uuid": "b95473d4-ec19-5fce-b691-fa59e8b238f5", 00:40:03.791 "is_configured": true, 00:40:03.791 "data_offset": 2048, 00:40:03.791 "data_size": 63488 00:40:03.791 }, 00:40:03.791 { 00:40:03.791 "name": "BaseBdev4", 00:40:03.791 "uuid": "5b4660c1-50aa-5513-b78c-fef1f06184aa", 00:40:03.791 "is_configured": true, 00:40:03.791 "data_offset": 2048, 00:40:03.791 "data_size": 63488 00:40:03.791 } 00:40:03.791 ] 00:40:03.791 }' 00:40:03.791 05:30:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:03.791 05:30:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:04.357 05:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:40:04.357 05:30:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:04.357 05:30:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:40:04.357 [2024-12-09 05:30:51.084327] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:04.357 [2024-12-09 05:30:51.084513] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:04.357 [2024-12-09 05:30:51.088558] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:04.357 [2024-12-09 05:30:51.088860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:04.357 [2024-12-09 05:30:51.089153] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:04.357 { 00:40:04.357 "results": [ 00:40:04.357 { 00:40:04.357 "job": "raid_bdev1", 00:40:04.357 "core_mask": "0x1", 00:40:04.357 "workload": "randrw", 00:40:04.357 "percentage": 50, 00:40:04.357 "status": "finished", 00:40:04.357 "queue_depth": 1, 00:40:04.357 "io_size": 131072, 00:40:04.357 "runtime": 1.44942, 00:40:04.357 "iops": 6541.237184528984, 00:40:04.357 "mibps": 817.654648066123, 00:40:04.357 "io_failed": 0, 00:40:04.357 "io_timeout": 0, 00:40:04.357 "avg_latency_us": 148.42387396803176, 00:40:04.357 "min_latency_us": 40.02909090909091, 00:40:04.357 "max_latency_us": 2055.447272727273 00:40:04.357 } 00:40:04.357 ], 00:40:04.357 "core_count": 1 00:40:04.357 } 00:40:04.357 [2024-12-09 05:30:51.089324] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:40:04.357 05:30:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:04.357 05:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75276 00:40:04.357 05:30:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75276 ']' 00:40:04.357 05:30:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75276 00:40:04.357 05:30:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:40:04.357 05:30:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:04.357 05:30:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75276 00:40:04.357 05:30:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:04.357 05:30:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:04.357 05:30:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75276' 00:40:04.357 killing process with pid 75276 00:40:04.357 05:30:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75276 00:40:04.357 [2024-12-09 05:30:51.138489] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:40:04.357 05:30:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75276 00:40:04.616 [2024-12-09 05:30:51.457605] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:40:05.993 05:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.IEkaBPhqwg 00:40:05.993 05:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:40:05.993 05:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:40:05.993 05:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:40:05.993 05:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:40:05.993 05:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:40:05.993 05:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:40:05.993 05:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:40:05.993 00:40:05.993 real 0m5.242s 00:40:05.993 user 0m6.376s 00:40:05.993 sys 0m0.709s 
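The `fail_per_s=0.00` check in the trace above is how `raid_read_error_test` decides the injected read errors were absorbed: it greps the bdevperf log for the `raid_bdev1` result row, takes the sixth whitespace-separated field, and requires exactly `0.00` failed IO/s, since raid1 has redundancy and should mask a single base-bdev read failure. A minimal standalone sketch of that pipeline — the fake log line and its column layout are assumptions made to match the `awk '{print $6}'` seen in the trace, not verbatim bdevperf output:

```shell
# Fabricate a bdevperf-style log (illustrative layout; field 6 is Fail/s
# under this assumed column order) and run the same grep/grep/awk check
# the trace performs against /raidtest/tmp.IEkaBPhqwg.
log=$(mktemp)
cat > "$log" <<'EOF'
Job: raid_bdev1 (Core Mask 0x1)
raid_bdev1 : 6541.24 817.65 0.00 0.00 148.42 40.03 2055.45
EOF
fail_per_s=$(grep -v Job "$log" | grep raid_bdev1 | awk '{print $6}')
# The trace uses a bash pattern match ([[ 0.00 = \0\.\0\0 ]]); a plain
# string comparison expresses the same pass condition.
[ "$fail_per_s" = "0.00" ] && echo "no failed IOs"
rm -f "$log"
```

Only a nonzero sixth field (failed IO/s) would make this check, and hence the test, fail.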
00:40:05.993 ************************************ 00:40:05.993 END TEST raid_read_error_test 00:40:05.993 ************************************ 00:40:05.993 05:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:05.993 05:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:05.993 05:30:52 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:40:05.993 05:30:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:40:05.993 05:30:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:05.993 05:30:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:40:05.993 ************************************ 00:40:05.993 START TEST raid_write_error_test 00:40:05.993 ************************************ 00:40:05.993 05:30:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:40:05.993 05:30:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:40:05.993 05:30:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:40:05.993 05:30:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:40:05.993 05:30:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:40:05.993 05:30:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:40:05.994 05:30:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:40:05.994 05:30:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:40:05.994 05:30:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:40:05.994 05:30:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:40:05.994 05:30:52 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:40:05.994 05:30:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:40:05.994 05:30:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:40:05.994 05:30:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:40:05.994 05:30:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:40:05.994 05:30:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:40:05.994 05:30:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:40:05.994 05:30:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:40:05.994 05:30:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:40:05.994 05:30:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:40:05.994 05:30:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:40:05.994 05:30:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:40:05.994 05:30:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:40:05.994 05:30:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:40:05.994 05:30:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:40:05.994 05:30:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:40:05.994 05:30:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:40:05.994 05:30:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:40:05.994 05:30:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2t8803be7E 00:40:05.994 05:30:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75427 00:40:05.994 05:30:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75427 00:40:05.994 05:30:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:40:05.994 05:30:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75427 ']' 00:40:05.994 05:30:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:05.994 05:30:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:05.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:05.994 05:30:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:05.994 05:30:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:05.994 05:30:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:05.994 [2024-12-09 05:30:52.942612] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
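The lines above show `raid_write_error_test` starting its bdevperf instance: a log file is allocated with `mktemp -p /raidtest` (yielding the `/raidtest/tmp.2t8803be7E` path used later), then bdevperf is launched with the flags shown and `waitforlisten` blocks until `/var/tmp/spdk.sock` is up. A sketch of that bring-up, with the launch guarded so it is a no-op outside an SPDK checkout — the binary path is copied from the trace and the flag string is reproduced verbatim rather than interpreted:

```shell
# Allocate the bdevperf log the same way the trace does (the trace passes
# -p /raidtest; /tmp is substituted here so the sketch runs anywhere).
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
bdevperf_log=$(mktemp -p "${TMPDIR:-/tmp}")
if [ -x "$bdevperf" ]; then
    # Flags copied from the trace; per the trace's flow, -z defers IO until
    # the later bdevperf.py perform_tests RPC, after the raid bdev exists.
    "$bdevperf" -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f \
        -L bdev_raid > "$bdevperf_log" 2>&1 &
    raid_pid=$!
fi
```

The PID captured here is what the trace's `waitforlisten 75427` and later `killprocess` operate on.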
00:40:05.994 [2024-12-09 05:30:52.943160] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75427 ] 00:40:06.251 [2024-12-09 05:30:53.135006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:06.509 [2024-12-09 05:30:53.289992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:06.768 [2024-12-09 05:30:53.525676] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:06.768 [2024-12-09 05:30:53.525732] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:07.026 05:30:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:07.026 05:30:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:40:07.026 05:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:40:07.026 05:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:40:07.026 05:30:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.026 05:30:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:07.026 BaseBdev1_malloc 00:40:07.027 05:30:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.027 05:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:40:07.027 05:30:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.027 05:30:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:07.027 true 00:40:07.027 05:30:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:40:07.027 05:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:40:07.027 05:30:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.027 05:30:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:07.027 [2024-12-09 05:30:53.939350] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:40:07.027 [2024-12-09 05:30:53.939424] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:07.027 [2024-12-09 05:30:53.939454] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:40:07.027 [2024-12-09 05:30:53.939473] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:07.027 [2024-12-09 05:30:53.942503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:07.027 BaseBdev1 00:40:07.027 [2024-12-09 05:30:53.942687] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:40:07.027 05:30:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.027 05:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:40:07.027 05:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:40:07.027 05:30:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.027 05:30:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:07.027 BaseBdev2_malloc 00:40:07.027 05:30:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.027 05:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:40:07.027 05:30:53 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.027 05:30:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:07.027 true 00:40:07.027 05:30:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.285 05:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:40:07.285 05:30:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.285 05:30:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:07.285 [2024-12-09 05:30:54.003580] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:40:07.285 [2024-12-09 05:30:54.003803] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:07.285 [2024-12-09 05:30:54.003873] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:40:07.285 [2024-12-09 05:30:54.004010] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:07.285 [2024-12-09 05:30:54.006963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:07.285 [2024-12-09 05:30:54.007012] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:40:07.285 BaseBdev2 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:40:07.285 BaseBdev3_malloc 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:07.285 true 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:07.285 [2024-12-09 05:30:54.079455] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:40:07.285 [2024-12-09 05:30:54.079527] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:07.285 [2024-12-09 05:30:54.079556] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:40:07.285 [2024-12-09 05:30:54.079574] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:07.285 [2024-12-09 05:30:54.082543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:07.285 [2024-12-09 05:30:54.082595] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:40:07.285 BaseBdev3 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:07.285 BaseBdev4_malloc 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:07.285 true 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:07.285 [2024-12-09 05:30:54.145946] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:40:07.285 [2024-12-09 05:30:54.146017] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:07.285 [2024-12-09 05:30:54.146046] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:40:07.285 [2024-12-09 05:30:54.146063] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:07.285 [2024-12-09 05:30:54.149282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:07.285 [2024-12-09 05:30:54.149344] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:40:07.285 BaseBdev4 
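At this point the trace has built the same three-layer stack four times: `bdev_malloc_create` makes the backing bdev, `bdev_error_create` wraps it in an error-injection bdev (named `EE_<malloc>` in the trace), and `bdev_passthru_create` gives the stack its final `BaseBdevN` name that `bdev_raid_create` consumes next. A dry-run sketch of that setup loop — the `./scripts/rpc.py` path is an assumption, and the `echo` makes it print the commands instead of issuing RPCs, so drop the `echo` to run it against a live SPDK target:

```shell
# Dry-run: prints the RPC sequence mirrored from the rpc_cmd calls in the
# trace above. Replace the echo prefix with the real rpc.py to execute.
rpc="echo ./scripts/rpc.py"
for bdev in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    $rpc bdev_malloc_create 32 512 -b "${bdev}_malloc"
    $rpc bdev_error_create "${bdev}_malloc"
    $rpc bdev_passthru_create -b "EE_${bdev}_malloc" -p "$bdev"
done
$rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' \
    -n raid_bdev1 -s
```

Layering the error bdev under the passthru is what lets the test later target a single slot with `bdev_error_inject_error EE_BaseBdev1_malloc write failure` while the raid bdev only ever sees the `BaseBdevN` names.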
00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:07.285 [2024-12-09 05:30:54.154107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:07.285 [2024-12-09 05:30:54.157202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:07.285 [2024-12-09 05:30:54.157429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:40:07.285 [2024-12-09 05:30:54.157587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:40:07.285 [2024-12-09 05:30:54.158004] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:40:07.285 [2024-12-09 05:30:54.158050] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:40:07.285 [2024-12-09 05:30:54.158382] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:40:07.285 [2024-12-09 05:30:54.158607] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:40:07.285 [2024-12-09 05:30:54.158624] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:40:07.285 [2024-12-09 05:30:54.158911] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.285 05:30:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:07.285 "name": "raid_bdev1", 00:40:07.285 "uuid": "7d4c3b68-a634-49ff-ba08-6a3df62baae5", 00:40:07.285 "strip_size_kb": 0, 00:40:07.285 "state": "online", 00:40:07.285 "raid_level": "raid1", 00:40:07.285 "superblock": true, 00:40:07.285 "num_base_bdevs": 4, 00:40:07.285 "num_base_bdevs_discovered": 4, 00:40:07.285 
"num_base_bdevs_operational": 4, 00:40:07.285 "base_bdevs_list": [ 00:40:07.285 { 00:40:07.285 "name": "BaseBdev1", 00:40:07.285 "uuid": "d813a16b-7533-5c11-b7fb-c5780fd086ff", 00:40:07.285 "is_configured": true, 00:40:07.285 "data_offset": 2048, 00:40:07.285 "data_size": 63488 00:40:07.285 }, 00:40:07.285 { 00:40:07.285 "name": "BaseBdev2", 00:40:07.285 "uuid": "d5a03ff6-5b88-5a78-a299-4a323ac9aeda", 00:40:07.285 "is_configured": true, 00:40:07.286 "data_offset": 2048, 00:40:07.286 "data_size": 63488 00:40:07.286 }, 00:40:07.286 { 00:40:07.286 "name": "BaseBdev3", 00:40:07.286 "uuid": "52a8f62b-eb5a-5732-ae46-eb112f1c47d9", 00:40:07.286 "is_configured": true, 00:40:07.286 "data_offset": 2048, 00:40:07.286 "data_size": 63488 00:40:07.286 }, 00:40:07.286 { 00:40:07.286 "name": "BaseBdev4", 00:40:07.286 "uuid": "d442e49e-6d33-56aa-b3f4-36b33d4e10c8", 00:40:07.286 "is_configured": true, 00:40:07.286 "data_offset": 2048, 00:40:07.286 "data_size": 63488 00:40:07.286 } 00:40:07.286 ] 00:40:07.286 }' 00:40:07.286 05:30:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:07.286 05:30:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:07.851 05:30:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:40:07.851 05:30:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:40:08.122 [2024-12-09 05:30:54.824085] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:40:09.078 05:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:40:09.078 05:30:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:09.078 05:30:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:09.078 [2024-12-09 05:30:55.701491] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:40:09.078 [2024-12-09 05:30:55.701580] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:40:09.078 [2024-12-09 05:30:55.701914] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:40:09.078 05:30:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:09.078 05:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:40:09.078 05:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:40:09.078 05:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:40:09.078 05:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:40:09.078 05:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:40:09.078 05:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:09.078 05:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:09.078 05:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:09.078 05:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:09.078 05:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:09.078 05:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:09.078 05:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:09.078 05:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:09.078 05:30:55 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:40:09.078 05:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:09.078 05:30:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:09.078 05:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:09.078 05:30:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:09.078 05:30:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:09.078 05:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:09.078 "name": "raid_bdev1", 00:40:09.078 "uuid": "7d4c3b68-a634-49ff-ba08-6a3df62baae5", 00:40:09.078 "strip_size_kb": 0, 00:40:09.078 "state": "online", 00:40:09.078 "raid_level": "raid1", 00:40:09.078 "superblock": true, 00:40:09.078 "num_base_bdevs": 4, 00:40:09.078 "num_base_bdevs_discovered": 3, 00:40:09.078 "num_base_bdevs_operational": 3, 00:40:09.078 "base_bdevs_list": [ 00:40:09.078 { 00:40:09.078 "name": null, 00:40:09.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:09.078 "is_configured": false, 00:40:09.078 "data_offset": 0, 00:40:09.078 "data_size": 63488 00:40:09.078 }, 00:40:09.078 { 00:40:09.078 "name": "BaseBdev2", 00:40:09.078 "uuid": "d5a03ff6-5b88-5a78-a299-4a323ac9aeda", 00:40:09.078 "is_configured": true, 00:40:09.078 "data_offset": 2048, 00:40:09.078 "data_size": 63488 00:40:09.078 }, 00:40:09.078 { 00:40:09.078 "name": "BaseBdev3", 00:40:09.078 "uuid": "52a8f62b-eb5a-5732-ae46-eb112f1c47d9", 00:40:09.078 "is_configured": true, 00:40:09.078 "data_offset": 2048, 00:40:09.078 "data_size": 63488 00:40:09.078 }, 00:40:09.078 { 00:40:09.078 "name": "BaseBdev4", 00:40:09.078 "uuid": "d442e49e-6d33-56aa-b3f4-36b33d4e10c8", 00:40:09.078 "is_configured": true, 00:40:09.078 "data_offset": 2048, 00:40:09.078 "data_size": 63488 00:40:09.078 } 00:40:09.078 ] 
00:40:09.078 }' 00:40:09.078 05:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:09.078 05:30:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:09.337 05:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:40:09.337 05:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:09.337 05:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:09.337 [2024-12-09 05:30:56.238465] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:09.337 [2024-12-09 05:30:56.238502] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:09.337 { 00:40:09.337 "results": [ 00:40:09.337 { 00:40:09.337 "job": "raid_bdev1", 00:40:09.337 "core_mask": "0x1", 00:40:09.337 "workload": "randrw", 00:40:09.337 "percentage": 50, 00:40:09.337 "status": "finished", 00:40:09.337 "queue_depth": 1, 00:40:09.337 "io_size": 131072, 00:40:09.337 "runtime": 1.411665, 00:40:09.337 "iops": 7136.962381301512, 00:40:09.337 "mibps": 892.120297662689, 00:40:09.337 "io_failed": 0, 00:40:09.337 "io_timeout": 0, 00:40:09.337 "avg_latency_us": 135.55198267538913, 00:40:09.337 "min_latency_us": 40.72727272727273, 00:40:09.337 "max_latency_us": 2204.3927272727274 00:40:09.337 } 00:40:09.337 ], 00:40:09.337 "core_count": 1 00:40:09.337 } 00:40:09.337 [2024-12-09 05:30:56.242135] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:09.337 [2024-12-09 05:30:56.242198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:09.337 [2024-12-09 05:30:56.242356] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:09.337 [2024-12-09 05:30:56.242377] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, 
state offline 00:40:09.337 05:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:09.337 05:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75427 00:40:09.337 05:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75427 ']' 00:40:09.337 05:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75427 00:40:09.337 05:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:40:09.337 05:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:09.337 05:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75427 00:40:09.337 05:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:09.337 05:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:09.337 05:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75427' 00:40:09.337 killing process with pid 75427 00:40:09.337 05:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75427 00:40:09.337 [2024-12-09 05:30:56.275296] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:40:09.337 05:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75427 00:40:09.904 [2024-12-09 05:30:56.607657] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:40:11.283 05:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:40:11.283 05:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2t8803be7E 00:40:11.283 05:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:40:11.283 05:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:40:11.283 05:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:40:11.283 05:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:40:11.283 05:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:40:11.283 ************************************ 00:40:11.283 END TEST raid_write_error_test 00:40:11.283 ************************************ 00:40:11.283 05:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:40:11.283 00:40:11.283 real 0m5.180s 00:40:11.283 user 0m6.198s 00:40:11.283 sys 0m0.700s 00:40:11.283 05:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:11.283 05:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:40:11.283 05:30:58 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:40:11.283 05:30:58 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:40:11.283 05:30:58 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:40:11.283 05:30:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:40:11.283 05:30:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:11.283 05:30:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:40:11.283 ************************************ 00:40:11.283 START TEST raid_rebuild_test 00:40:11.283 ************************************ 00:40:11.283 05:30:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:40:11.283 05:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:40:11.283 05:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:40:11.283 05:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:40:11.283 
05:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:40:11.283 05:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:40:11.283 05:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:40:11.283 05:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:40:11.283 05:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:40:11.283 05:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:40:11.283 05:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:40:11.283 05:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:40:11.283 05:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:40:11.283 05:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:40:11.283 05:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:40:11.283 05:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:40:11.283 05:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:40:11.283 05:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:40:11.283 05:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:40:11.283 05:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:40:11.283 05:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:40:11.283 05:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:40:11.283 05:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:40:11.283 05:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:40:11.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:11.283 05:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75572 00:40:11.283 05:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75572 00:40:11.283 05:30:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75572 ']' 00:40:11.283 05:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:40:11.283 05:30:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:11.283 05:30:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:11.283 05:30:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:11.283 05:30:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:11.283 05:30:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:11.283 [2024-12-09 05:30:58.170757] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:40:11.283 [2024-12-09 05:30:58.171166] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75572 ] 00:40:11.283 I/O size of 3145728 is greater than zero copy threshold (65536). 00:40:11.283 Zero copy mechanism will not be used. 
00:40:11.543 [2024-12-09 05:30:58.364734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:11.543 [2024-12-09 05:30:58.512285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:11.802 [2024-12-09 05:30:58.734398] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:11.802 [2024-12-09 05:30:58.734463] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:12.370 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:12.370 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:40:12.370 05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:40:12.370 05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:40:12.370 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.370 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:12.370 BaseBdev1_malloc 00:40:12.370 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.370 05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:40:12.370 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.370 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:12.370 [2024-12-09 05:30:59.222038] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:40:12.370 [2024-12-09 05:30:59.222381] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:12.370 [2024-12-09 05:30:59.222457] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:40:12.370 [2024-12-09 05:30:59.222707] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:12.370 [2024-12-09 05:30:59.225825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:12.370 [2024-12-09 05:30:59.226066] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:40:12.370 BaseBdev1 00:40:12.370 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.370 05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:40:12.370 05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:40:12.370 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.370 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:12.370 BaseBdev2_malloc 00:40:12.370 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.370 05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:40:12.370 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.370 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:12.370 [2024-12-09 05:30:59.280046] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:40:12.370 [2024-12-09 05:30:59.280339] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:12.370 [2024-12-09 05:30:59.280413] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:40:12.371 [2024-12-09 05:30:59.280539] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:12.371 [2024-12-09 05:30:59.283873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:12.371 BaseBdev2 00:40:12.371 [2024-12-09 
05:30:59.284113] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:40:12.371 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.371 05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:40:12.371 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.371 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:12.630 spare_malloc 00:40:12.630 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.630 05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:40:12.630 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.630 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:12.630 spare_delay 00:40:12.630 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.630 05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:40:12.630 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.630 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:12.630 [2024-12-09 05:30:59.357252] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:40:12.630 [2024-12-09 05:30:59.357335] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:12.630 [2024-12-09 05:30:59.357364] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:40:12.630 [2024-12-09 05:30:59.357382] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:12.630 [2024-12-09 
05:30:59.360462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:12.630 [2024-12-09 05:30:59.360528] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:40:12.630 spare 00:40:12.630 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.630 05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:40:12.630 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.630 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:12.630 [2024-12-09 05:30:59.365390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:12.630 [2024-12-09 05:30:59.368063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:12.630 [2024-12-09 05:30:59.368196] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:40:12.630 [2024-12-09 05:30:59.368234] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:40:12.630 [2024-12-09 05:30:59.368572] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:40:12.630 [2024-12-09 05:30:59.368794] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:40:12.630 [2024-12-09 05:30:59.368835] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:40:12.630 [2024-12-09 05:30:59.369055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:12.630 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.630 05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:40:12.630 05:30:59 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:12.630 05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:12.630 05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:12.630 05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:12.630 05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:40:12.630 05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:12.630 05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:12.630 05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:12.630 05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:12.630 05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:12.630 05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:12.630 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.630 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:12.630 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.630 05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:12.630 "name": "raid_bdev1", 00:40:12.630 "uuid": "f5cdd774-835d-4188-8d8c-4700ddde0dd1", 00:40:12.630 "strip_size_kb": 0, 00:40:12.630 "state": "online", 00:40:12.630 "raid_level": "raid1", 00:40:12.630 "superblock": false, 00:40:12.630 "num_base_bdevs": 2, 00:40:12.630 "num_base_bdevs_discovered": 2, 00:40:12.630 "num_base_bdevs_operational": 2, 00:40:12.630 "base_bdevs_list": [ 00:40:12.630 { 00:40:12.630 "name": "BaseBdev1", 
00:40:12.630 "uuid": "48d93262-11ec-5eb6-94ca-21291e5290f1", 00:40:12.630 "is_configured": true, 00:40:12.630 "data_offset": 0, 00:40:12.630 "data_size": 65536 00:40:12.630 }, 00:40:12.630 { 00:40:12.630 "name": "BaseBdev2", 00:40:12.630 "uuid": "d1bf62fc-9cc1-5adf-98dc-5cd144555cdb", 00:40:12.630 "is_configured": true, 00:40:12.630 "data_offset": 0, 00:40:12.630 "data_size": 65536 00:40:12.630 } 00:40:12.630 ] 00:40:12.630 }' 00:40:12.630 05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:12.630 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:13.199 05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:40:13.199 05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:40:13.199 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.199 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:13.199 [2024-12-09 05:30:59.881968] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:13.199 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.199 05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:40:13.199 05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:13.199 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.199 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:13.199 05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:40:13.199 05:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.199 05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:40:13.199 
05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:40:13.199 05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:40:13.199 05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:40:13.199 05:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:40:13.199 05:30:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:40:13.199 05:30:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:40:13.199 05:30:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:40:13.199 05:30:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:40:13.199 05:30:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:40:13.199 05:30:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:40:13.199 05:30:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:40:13.199 05:30:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:40:13.199 05:30:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:40:13.458 [2024-12-09 05:31:00.261790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:40:13.458 /dev/nbd0 00:40:13.458 05:31:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:40:13.458 05:31:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:40:13.458 05:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:40:13.458 05:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:40:13.458 05:31:00 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:40:13.458 05:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:40:13.458 05:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:40:13.458 05:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:40:13.458 05:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:40:13.458 05:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:40:13.458 05:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:13.458 1+0 records in 00:40:13.458 1+0 records out 00:40:13.458 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300517 s, 13.6 MB/s 00:40:13.458 05:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:13.458 05:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:40:13.458 05:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:13.458 05:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:40:13.458 05:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:40:13.458 05:31:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:13.458 05:31:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:40:13.458 05:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:40:13.458 05:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:40:13.458 05:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:40:20.023 65536+0 records in 00:40:20.023 65536+0 records out 00:40:20.023 33554432 bytes (34 MB, 32 MiB) copied, 6.48626 s, 5.2 MB/s 00:40:20.023 05:31:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:40:20.023 05:31:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:40:20.023 05:31:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:40:20.023 05:31:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:20.023 05:31:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:40:20.023 05:31:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:20.023 05:31:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:40:20.282 05:31:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:20.282 [2024-12-09 05:31:07.180908] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:20.282 05:31:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:20.282 05:31:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:20.282 05:31:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:20.282 05:31:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:20.282 05:31:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:20.282 05:31:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:40:20.282 05:31:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:40:20.282 05:31:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:40:20.282 05:31:07 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:40:20.282 05:31:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:20.282 [2024-12-09 05:31:07.196992] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:40:20.282 05:31:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:20.282 05:31:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:20.282 05:31:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:20.282 05:31:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:20.282 05:31:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:20.282 05:31:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:20.282 05:31:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:40:20.282 05:31:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:20.282 05:31:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:20.282 05:31:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:20.282 05:31:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:20.282 05:31:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:20.282 05:31:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:20.282 05:31:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:20.282 05:31:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:20.282 05:31:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:20.540 05:31:07 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:20.540 "name": "raid_bdev1", 00:40:20.540 "uuid": "f5cdd774-835d-4188-8d8c-4700ddde0dd1", 00:40:20.541 "strip_size_kb": 0, 00:40:20.541 "state": "online", 00:40:20.541 "raid_level": "raid1", 00:40:20.541 "superblock": false, 00:40:20.541 "num_base_bdevs": 2, 00:40:20.541 "num_base_bdevs_discovered": 1, 00:40:20.541 "num_base_bdevs_operational": 1, 00:40:20.541 "base_bdevs_list": [ 00:40:20.541 { 00:40:20.541 "name": null, 00:40:20.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:20.541 "is_configured": false, 00:40:20.541 "data_offset": 0, 00:40:20.541 "data_size": 65536 00:40:20.541 }, 00:40:20.541 { 00:40:20.541 "name": "BaseBdev2", 00:40:20.541 "uuid": "d1bf62fc-9cc1-5adf-98dc-5cd144555cdb", 00:40:20.541 "is_configured": true, 00:40:20.541 "data_offset": 0, 00:40:20.541 "data_size": 65536 00:40:20.541 } 00:40:20.541 ] 00:40:20.541 }' 00:40:20.541 05:31:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:20.541 05:31:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:20.799 05:31:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:40:20.799 05:31:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:20.799 05:31:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:20.799 [2024-12-09 05:31:07.749255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:20.800 [2024-12-09 05:31:07.765985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:40:20.800 05:31:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:20.800 05:31:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:40:20.800 [2024-12-09 05:31:07.768902] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:40:22.177 05:31:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:22.177 05:31:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:22.177 05:31:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:22.177 05:31:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:22.177 05:31:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:22.177 05:31:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:22.177 05:31:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:22.177 05:31:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:22.177 05:31:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:22.177 05:31:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:22.177 05:31:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:22.177 "name": "raid_bdev1", 00:40:22.177 "uuid": "f5cdd774-835d-4188-8d8c-4700ddde0dd1", 00:40:22.177 "strip_size_kb": 0, 00:40:22.177 "state": "online", 00:40:22.177 "raid_level": "raid1", 00:40:22.177 "superblock": false, 00:40:22.177 "num_base_bdevs": 2, 00:40:22.177 "num_base_bdevs_discovered": 2, 00:40:22.177 "num_base_bdevs_operational": 2, 00:40:22.177 "process": { 00:40:22.177 "type": "rebuild", 00:40:22.177 "target": "spare", 00:40:22.177 "progress": { 00:40:22.177 "blocks": 20480, 00:40:22.177 "percent": 31 00:40:22.177 } 00:40:22.177 }, 00:40:22.177 "base_bdevs_list": [ 00:40:22.177 { 00:40:22.177 "name": "spare", 00:40:22.177 "uuid": "85886d9c-4b33-531f-b895-cbcf4e191957", 00:40:22.177 "is_configured": true, 00:40:22.177 "data_offset": 0, 00:40:22.177 
"data_size": 65536 00:40:22.177 }, 00:40:22.177 { 00:40:22.177 "name": "BaseBdev2", 00:40:22.177 "uuid": "d1bf62fc-9cc1-5adf-98dc-5cd144555cdb", 00:40:22.177 "is_configured": true, 00:40:22.177 "data_offset": 0, 00:40:22.177 "data_size": 65536 00:40:22.177 } 00:40:22.177 ] 00:40:22.177 }' 00:40:22.177 05:31:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:22.177 05:31:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:22.177 05:31:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:22.177 05:31:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:22.177 05:31:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:40:22.177 05:31:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:22.177 05:31:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:22.177 [2024-12-09 05:31:08.942409] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:22.177 [2024-12-09 05:31:08.979141] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:40:22.177 [2024-12-09 05:31:08.979308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:22.177 [2024-12-09 05:31:08.979332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:22.177 [2024-12-09 05:31:08.979350] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:40:22.177 05:31:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:22.177 05:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:22.177 05:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:40:22.177 05:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:22.177 05:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:22.177 05:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:22.177 05:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:40:22.177 05:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:22.177 05:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:22.177 05:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:22.177 05:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:22.177 05:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:22.177 05:31:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:22.177 05:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:22.177 05:31:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:22.177 05:31:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:22.177 05:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:22.177 "name": "raid_bdev1", 00:40:22.177 "uuid": "f5cdd774-835d-4188-8d8c-4700ddde0dd1", 00:40:22.177 "strip_size_kb": 0, 00:40:22.177 "state": "online", 00:40:22.177 "raid_level": "raid1", 00:40:22.177 "superblock": false, 00:40:22.177 "num_base_bdevs": 2, 00:40:22.177 "num_base_bdevs_discovered": 1, 00:40:22.177 "num_base_bdevs_operational": 1, 00:40:22.177 "base_bdevs_list": [ 00:40:22.177 { 00:40:22.177 "name": null, 00:40:22.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:22.177 
"is_configured": false, 00:40:22.177 "data_offset": 0, 00:40:22.177 "data_size": 65536 00:40:22.177 }, 00:40:22.177 { 00:40:22.177 "name": "BaseBdev2", 00:40:22.177 "uuid": "d1bf62fc-9cc1-5adf-98dc-5cd144555cdb", 00:40:22.177 "is_configured": true, 00:40:22.177 "data_offset": 0, 00:40:22.177 "data_size": 65536 00:40:22.177 } 00:40:22.177 ] 00:40:22.177 }' 00:40:22.177 05:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:22.177 05:31:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:22.746 05:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:22.746 05:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:22.746 05:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:40:22.746 05:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:40:22.746 05:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:22.746 05:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:22.746 05:31:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:22.746 05:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:22.746 05:31:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:22.746 05:31:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:22.746 05:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:22.746 "name": "raid_bdev1", 00:40:22.746 "uuid": "f5cdd774-835d-4188-8d8c-4700ddde0dd1", 00:40:22.746 "strip_size_kb": 0, 00:40:22.746 "state": "online", 00:40:22.746 "raid_level": "raid1", 00:40:22.746 "superblock": false, 00:40:22.746 "num_base_bdevs": 2, 00:40:22.746 
"num_base_bdevs_discovered": 1, 00:40:22.746 "num_base_bdevs_operational": 1, 00:40:22.746 "base_bdevs_list": [ 00:40:22.746 { 00:40:22.746 "name": null, 00:40:22.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:22.746 "is_configured": false, 00:40:22.746 "data_offset": 0, 00:40:22.746 "data_size": 65536 00:40:22.746 }, 00:40:22.746 { 00:40:22.746 "name": "BaseBdev2", 00:40:22.746 "uuid": "d1bf62fc-9cc1-5adf-98dc-5cd144555cdb", 00:40:22.746 "is_configured": true, 00:40:22.746 "data_offset": 0, 00:40:22.746 "data_size": 65536 00:40:22.746 } 00:40:22.746 ] 00:40:22.746 }' 00:40:22.746 05:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:22.746 05:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:40:22.746 05:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:22.746 05:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:40:22.746 05:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:40:22.746 05:31:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:22.746 05:31:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:22.746 [2024-12-09 05:31:09.690626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:22.746 [2024-12-09 05:31:09.707109] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:40:22.746 05:31:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:22.746 05:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:40:22.746 [2024-12-09 05:31:09.710154] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:24.124 05:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:24.124 05:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:24.124 05:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:24.124 05:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:24.124 05:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:24.124 05:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:24.124 05:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:24.124 05:31:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.124 05:31:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:24.124 05:31:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.124 05:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:24.124 "name": "raid_bdev1", 00:40:24.124 "uuid": "f5cdd774-835d-4188-8d8c-4700ddde0dd1", 00:40:24.124 "strip_size_kb": 0, 00:40:24.124 "state": "online", 00:40:24.124 "raid_level": "raid1", 00:40:24.124 "superblock": false, 00:40:24.124 "num_base_bdevs": 2, 00:40:24.124 "num_base_bdevs_discovered": 2, 00:40:24.124 "num_base_bdevs_operational": 2, 00:40:24.124 "process": { 00:40:24.124 "type": "rebuild", 00:40:24.124 "target": "spare", 00:40:24.124 "progress": { 00:40:24.124 "blocks": 20480, 00:40:24.124 "percent": 31 00:40:24.124 } 00:40:24.124 }, 00:40:24.124 "base_bdevs_list": [ 00:40:24.124 { 00:40:24.124 "name": "spare", 00:40:24.124 "uuid": "85886d9c-4b33-531f-b895-cbcf4e191957", 00:40:24.124 "is_configured": true, 00:40:24.124 "data_offset": 0, 00:40:24.124 "data_size": 65536 00:40:24.124 }, 00:40:24.124 { 00:40:24.124 "name": "BaseBdev2", 00:40:24.124 "uuid": 
"d1bf62fc-9cc1-5adf-98dc-5cd144555cdb", 00:40:24.124 "is_configured": true, 00:40:24.124 "data_offset": 0, 00:40:24.124 "data_size": 65536 00:40:24.124 } 00:40:24.124 ] 00:40:24.124 }' 00:40:24.124 05:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:24.124 05:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:24.124 05:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:24.124 05:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:24.124 05:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:40:24.125 05:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:40:24.125 05:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:40:24.125 05:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:40:24.125 05:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=406 00:40:24.125 05:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:40:24.125 05:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:24.125 05:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:24.125 05:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:24.125 05:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:24.125 05:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:24.125 05:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:24.125 05:31:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:40:24.125 05:31:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:24.125 05:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:24.125 05:31:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.125 05:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:24.125 "name": "raid_bdev1", 00:40:24.125 "uuid": "f5cdd774-835d-4188-8d8c-4700ddde0dd1", 00:40:24.125 "strip_size_kb": 0, 00:40:24.125 "state": "online", 00:40:24.125 "raid_level": "raid1", 00:40:24.125 "superblock": false, 00:40:24.125 "num_base_bdevs": 2, 00:40:24.125 "num_base_bdevs_discovered": 2, 00:40:24.125 "num_base_bdevs_operational": 2, 00:40:24.125 "process": { 00:40:24.125 "type": "rebuild", 00:40:24.125 "target": "spare", 00:40:24.125 "progress": { 00:40:24.125 "blocks": 22528, 00:40:24.125 "percent": 34 00:40:24.125 } 00:40:24.125 }, 00:40:24.125 "base_bdevs_list": [ 00:40:24.125 { 00:40:24.125 "name": "spare", 00:40:24.125 "uuid": "85886d9c-4b33-531f-b895-cbcf4e191957", 00:40:24.125 "is_configured": true, 00:40:24.125 "data_offset": 0, 00:40:24.125 "data_size": 65536 00:40:24.125 }, 00:40:24.125 { 00:40:24.125 "name": "BaseBdev2", 00:40:24.125 "uuid": "d1bf62fc-9cc1-5adf-98dc-5cd144555cdb", 00:40:24.125 "is_configured": true, 00:40:24.125 "data_offset": 0, 00:40:24.125 "data_size": 65536 00:40:24.125 } 00:40:24.125 ] 00:40:24.125 }' 00:40:24.125 05:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:24.125 05:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:24.125 05:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:24.125 05:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:24.125 05:31:11 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:40:25.505 05:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:40:25.505 05:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:25.505 05:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:25.505 05:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:25.505 05:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:25.505 05:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:25.505 05:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:25.505 05:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:25.505 05:31:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.505 05:31:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:25.505 05:31:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.505 05:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:25.505 "name": "raid_bdev1", 00:40:25.505 "uuid": "f5cdd774-835d-4188-8d8c-4700ddde0dd1", 00:40:25.505 "strip_size_kb": 0, 00:40:25.505 "state": "online", 00:40:25.505 "raid_level": "raid1", 00:40:25.505 "superblock": false, 00:40:25.505 "num_base_bdevs": 2, 00:40:25.505 "num_base_bdevs_discovered": 2, 00:40:25.505 "num_base_bdevs_operational": 2, 00:40:25.505 "process": { 00:40:25.505 "type": "rebuild", 00:40:25.505 "target": "spare", 00:40:25.505 "progress": { 00:40:25.505 "blocks": 47104, 00:40:25.505 "percent": 71 00:40:25.505 } 00:40:25.505 }, 00:40:25.505 "base_bdevs_list": [ 00:40:25.505 { 00:40:25.505 "name": "spare", 00:40:25.505 "uuid": 
"85886d9c-4b33-531f-b895-cbcf4e191957", 00:40:25.505 "is_configured": true, 00:40:25.505 "data_offset": 0, 00:40:25.505 "data_size": 65536 00:40:25.505 }, 00:40:25.505 { 00:40:25.505 "name": "BaseBdev2", 00:40:25.505 "uuid": "d1bf62fc-9cc1-5adf-98dc-5cd144555cdb", 00:40:25.505 "is_configured": true, 00:40:25.505 "data_offset": 0, 00:40:25.505 "data_size": 65536 00:40:25.505 } 00:40:25.505 ] 00:40:25.505 }' 00:40:25.505 05:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:25.505 05:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:25.505 05:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:25.505 05:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:25.505 05:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:40:26.071 [2024-12-09 05:31:12.938781] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:40:26.072 [2024-12-09 05:31:12.938905] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:40:26.072 [2024-12-09 05:31:12.938983] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:26.330 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:40:26.330 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:26.330 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:26.330 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:26.330 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:26.330 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:26.330 05:31:13 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:26.330 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:26.330 05:31:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.330 05:31:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:26.330 05:31:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.330 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:26.330 "name": "raid_bdev1", 00:40:26.330 "uuid": "f5cdd774-835d-4188-8d8c-4700ddde0dd1", 00:40:26.330 "strip_size_kb": 0, 00:40:26.330 "state": "online", 00:40:26.330 "raid_level": "raid1", 00:40:26.330 "superblock": false, 00:40:26.330 "num_base_bdevs": 2, 00:40:26.330 "num_base_bdevs_discovered": 2, 00:40:26.330 "num_base_bdevs_operational": 2, 00:40:26.330 "base_bdevs_list": [ 00:40:26.330 { 00:40:26.330 "name": "spare", 00:40:26.330 "uuid": "85886d9c-4b33-531f-b895-cbcf4e191957", 00:40:26.330 "is_configured": true, 00:40:26.330 "data_offset": 0, 00:40:26.330 "data_size": 65536 00:40:26.330 }, 00:40:26.330 { 00:40:26.330 "name": "BaseBdev2", 00:40:26.330 "uuid": "d1bf62fc-9cc1-5adf-98dc-5cd144555cdb", 00:40:26.330 "is_configured": true, 00:40:26.330 "data_offset": 0, 00:40:26.330 "data_size": 65536 00:40:26.330 } 00:40:26.330 ] 00:40:26.330 }' 00:40:26.330 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:26.588 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:40:26.588 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:26.588 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:40:26.588 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 00:40:26.588 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:26.589 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:26.589 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:40:26.589 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:40:26.589 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:26.589 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:26.589 05:31:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.589 05:31:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:26.589 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:26.589 05:31:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.589 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:26.589 "name": "raid_bdev1", 00:40:26.589 "uuid": "f5cdd774-835d-4188-8d8c-4700ddde0dd1", 00:40:26.589 "strip_size_kb": 0, 00:40:26.589 "state": "online", 00:40:26.589 "raid_level": "raid1", 00:40:26.589 "superblock": false, 00:40:26.589 "num_base_bdevs": 2, 00:40:26.589 "num_base_bdevs_discovered": 2, 00:40:26.589 "num_base_bdevs_operational": 2, 00:40:26.589 "base_bdevs_list": [ 00:40:26.589 { 00:40:26.589 "name": "spare", 00:40:26.589 "uuid": "85886d9c-4b33-531f-b895-cbcf4e191957", 00:40:26.589 "is_configured": true, 00:40:26.589 "data_offset": 0, 00:40:26.589 "data_size": 65536 00:40:26.589 }, 00:40:26.589 { 00:40:26.589 "name": "BaseBdev2", 00:40:26.589 "uuid": "d1bf62fc-9cc1-5adf-98dc-5cd144555cdb", 00:40:26.589 "is_configured": true, 00:40:26.589 "data_offset": 0, 00:40:26.589 "data_size": 65536 
00:40:26.589 } 00:40:26.589 ] 00:40:26.589 }' 00:40:26.589 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:26.589 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:40:26.589 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:26.589 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:40:26.589 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:40:26.589 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:26.589 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:26.589 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:26.589 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:26.589 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:40:26.589 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:26.589 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:26.589 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:26.589 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:26.589 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:26.589 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:26.589 05:31:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.589 05:31:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:26.589 
05:31:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.847 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:26.847 "name": "raid_bdev1", 00:40:26.847 "uuid": "f5cdd774-835d-4188-8d8c-4700ddde0dd1", 00:40:26.847 "strip_size_kb": 0, 00:40:26.847 "state": "online", 00:40:26.847 "raid_level": "raid1", 00:40:26.847 "superblock": false, 00:40:26.847 "num_base_bdevs": 2, 00:40:26.847 "num_base_bdevs_discovered": 2, 00:40:26.847 "num_base_bdevs_operational": 2, 00:40:26.847 "base_bdevs_list": [ 00:40:26.847 { 00:40:26.847 "name": "spare", 00:40:26.847 "uuid": "85886d9c-4b33-531f-b895-cbcf4e191957", 00:40:26.847 "is_configured": true, 00:40:26.847 "data_offset": 0, 00:40:26.847 "data_size": 65536 00:40:26.847 }, 00:40:26.847 { 00:40:26.847 "name": "BaseBdev2", 00:40:26.847 "uuid": "d1bf62fc-9cc1-5adf-98dc-5cd144555cdb", 00:40:26.847 "is_configured": true, 00:40:26.847 "data_offset": 0, 00:40:26.847 "data_size": 65536 00:40:26.847 } 00:40:26.847 ] 00:40:26.847 }' 00:40:26.847 05:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:26.847 05:31:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:27.104 05:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:40:27.104 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.104 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:27.104 [2024-12-09 05:31:14.068397] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:27.104 [2024-12-09 05:31:14.068448] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:27.104 [2024-12-09 05:31:14.068568] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:27.104 [2024-12-09 05:31:14.068720] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:27.104 [2024-12-09 05:31:14.068768] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:40:27.104 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.104 05:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:27.104 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.104 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:27.361 05:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:40:27.361 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.361 05:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:40:27.361 05:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:40:27.361 05:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:40:27.361 05:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:40:27.361 05:31:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:40:27.361 05:31:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:40:27.361 05:31:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:40:27.361 05:31:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:40:27.361 05:31:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:40:27.361 05:31:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:40:27.361 05:31:14 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:40:27.361 05:31:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:40:27.361 05:31:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:40:27.619 /dev/nbd0 00:40:27.619 05:31:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:40:27.619 05:31:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:40:27.619 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:40:27.619 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:40:27.619 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:40:27.619 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:40:27.619 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:40:27.619 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:40:27.619 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:40:27.619 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:40:27.619 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:27.619 1+0 records in 00:40:27.619 1+0 records out 00:40:27.619 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404218 s, 10.1 MB/s 00:40:27.619 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:27.619 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:40:27.619 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:27.619 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:40:27.619 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:40:27.619 05:31:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:27.619 05:31:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:40:27.620 05:31:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:40:27.877 /dev/nbd1 00:40:28.135 05:31:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:40:28.135 05:31:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:40:28.135 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:40:28.135 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:40:28.135 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:40:28.135 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:40:28.135 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:40:28.135 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:40:28.135 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:40:28.135 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:40:28.136 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:28.136 1+0 records in 00:40:28.136 1+0 records out 00:40:28.136 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454278 s, 9.0 MB/s 00:40:28.136 05:31:14 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:28.136 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:40:28.136 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:28.136 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:40:28.136 05:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:40:28.136 05:31:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:28.136 05:31:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:40:28.136 05:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:40:28.136 05:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:40:28.136 05:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:40:28.136 05:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:40:28.136 05:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:28.136 05:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:40:28.136 05:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:28.136 05:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:40:28.700 05:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:28.700 05:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:28.700 05:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:28.700 
05:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:28.700 05:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:28.700 05:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:28.700 05:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:40:28.700 05:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:40:28.700 05:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:28.700 05:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:40:28.958 05:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:40:28.958 05:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:40:28.958 05:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:40:28.958 05:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:28.958 05:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:28.958 05:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:40:28.958 05:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:40:28.958 05:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:40:28.958 05:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:40:28.958 05:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75572 00:40:28.958 05:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75572 ']' 00:40:28.958 05:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75572 00:40:28.958 05:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 
-- # uname 00:40:28.958 05:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:28.958 05:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75572 00:40:28.958 05:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:28.958 05:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:28.958 killing process with pid 75572 00:40:28.958 05:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75572' 00:40:28.958 05:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75572 00:40:28.958 Received shutdown signal, test time was about 60.000000 seconds 00:40:28.958 00:40:28.958 Latency(us) 00:40:28.958 [2024-12-09T05:31:15.930Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:28.958 [2024-12-09T05:31:15.930Z] =================================================================================================================== 00:40:28.958 [2024-12-09T05:31:15.930Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:40:28.958 [2024-12-09 05:31:15.730556] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:40:28.958 05:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75572 00:40:29.216 [2024-12-09 05:31:16.003950] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:40:30.608 00:40:30.608 real 0m19.141s 00:40:30.608 user 0m21.598s 00:40:30.608 sys 0m3.981s 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:30.608 ************************************ 00:40:30.608 END TEST raid_rebuild_test 
00:40:30.608 ************************************ 00:40:30.608 05:31:17 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:40:30.608 05:31:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:40:30.608 05:31:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:30.608 05:31:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:40:30.608 ************************************ 00:40:30.608 START TEST raid_rebuild_test_sb 00:40:30.608 ************************************ 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76029 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76029 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 76029 ']' 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:30.608 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:30.608 05:31:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:30.608 [2024-12-09 05:31:17.364584] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:40:30.608 I/O size of 3145728 is greater than zero copy threshold (65536). 00:40:30.608 Zero copy mechanism will not be used. 00:40:30.608 [2024-12-09 05:31:17.365352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76029 ] 00:40:30.608 [2024-12-09 05:31:17.552722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:30.867 [2024-12-09 05:31:17.676991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:31.126 [2024-12-09 05:31:17.875414] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:31.126 [2024-12-09 05:31:17.875506] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:31.385 05:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:31.385 05:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:40:31.385 05:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:40:31.385 05:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:40:31.385 05:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:40:31.385 05:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:31.644 BaseBdev1_malloc 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:31.644 [2024-12-09 05:31:18.370402] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:40:31.644 [2024-12-09 05:31:18.370493] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:31.644 [2024-12-09 05:31:18.370526] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:40:31.644 [2024-12-09 05:31:18.370547] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:31.644 [2024-12-09 05:31:18.373359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:31.644 [2024-12-09 05:31:18.373413] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:40:31.644 BaseBdev1 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:31.644 BaseBdev2_malloc 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:31.644 [2024-12-09 05:31:18.416567] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:40:31.644 [2024-12-09 05:31:18.416669] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:31.644 [2024-12-09 05:31:18.416703] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:40:31.644 [2024-12-09 05:31:18.416730] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:31.644 [2024-12-09 05:31:18.419482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:31.644 [2024-12-09 05:31:18.419524] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:40:31.644 BaseBdev2 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:31.644 spare_malloc 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:31.644 spare_delay 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:31.644 [2024-12-09 05:31:18.476858] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:40:31.644 [2024-12-09 05:31:18.476980] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:31.644 [2024-12-09 05:31:18.477013] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:40:31.644 [2024-12-09 05:31:18.477032] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:31.644 [2024-12-09 05:31:18.479970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:31.644 [2024-12-09 05:31:18.480022] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:40:31.644 spare 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:31.644 [2024-12-09 05:31:18.484998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:31.644 [2024-12-09 05:31:18.487439] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:31.644 [2024-12-09 05:31:18.487718] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:40:31.644 [2024-12-09 05:31:18.487782] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:40:31.644 [2024-12-09 05:31:18.488103] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:40:31.644 [2024-12-09 05:31:18.488371] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:40:31.644 [2024-12-09 05:31:18.488395] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:40:31.644 [2024-12-09 05:31:18.488589] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:31.644 05:31:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:31.644 05:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:31.644 "name": "raid_bdev1", 00:40:31.645 "uuid": "cdc4d671-6846-40d2-bf33-ce536f28a43f", 00:40:31.645 "strip_size_kb": 0, 00:40:31.645 "state": "online", 00:40:31.645 "raid_level": "raid1", 00:40:31.645 "superblock": true, 00:40:31.645 "num_base_bdevs": 2, 00:40:31.645 "num_base_bdevs_discovered": 2, 00:40:31.645 "num_base_bdevs_operational": 2, 00:40:31.645 "base_bdevs_list": [ 00:40:31.645 { 00:40:31.645 "name": "BaseBdev1", 00:40:31.645 "uuid": "34c0d38d-4175-5eb7-b053-2e0fdb2ac085", 00:40:31.645 "is_configured": true, 00:40:31.645 "data_offset": 2048, 00:40:31.645 "data_size": 63488 00:40:31.645 }, 00:40:31.645 { 00:40:31.645 "name": "BaseBdev2", 00:40:31.645 "uuid": "5c940c6b-c89a-548c-a561-123398f28e9a", 00:40:31.645 "is_configured": true, 00:40:31.645 "data_offset": 2048, 00:40:31.645 "data_size": 63488 00:40:31.645 } 00:40:31.645 ] 00:40:31.645 }' 00:40:31.645 05:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:31.645 05:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:32.212 05:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:40:32.212 05:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd 
bdev_get_bdevs -b raid_bdev1 00:40:32.212 05:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.212 05:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:32.212 [2024-12-09 05:31:19.025656] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:32.212 05:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.212 05:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:40:32.212 05:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:40:32.212 05:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:32.212 05:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.212 05:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:32.212 05:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.212 05:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:40:32.212 05:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:40:32.212 05:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:40:32.212 05:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:40:32.212 05:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:40:32.212 05:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:40:32.212 05:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:40:32.212 05:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:40:32.212 
05:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:40:32.212 05:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:40:32.212 05:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:40:32.212 05:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:40:32.212 05:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:40:32.212 05:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:40:32.471 [2024-12-09 05:31:19.441500] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:40:32.730 /dev/nbd0 00:40:32.730 05:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:40:32.730 05:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:40:32.730 05:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:40:32.730 05:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:40:32.730 05:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:40:32.730 05:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:40:32.730 05:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:40:32.730 05:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:40:32.730 05:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:40:32.730 05:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:40:32.730 05:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:32.730 1+0 records in 00:40:32.730 1+0 records out 00:40:32.730 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037251 s, 11.0 MB/s 00:40:32.730 05:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:32.730 05:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:40:32.730 05:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:32.730 05:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:40:32.730 05:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:40:32.730 05:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:32.730 05:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:40:32.730 05:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:40:32.730 05:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:40:32.730 05:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:40:39.291 63488+0 records in 00:40:39.291 63488+0 records out 00:40:39.291 32505856 bytes (33 MB, 31 MiB) copied, 6.67546 s, 4.9 MB/s 00:40:39.291 05:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:40:39.291 05:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:40:39.291 05:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:40:39.291 05:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:39.291 05:31:26 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@51 -- # local i 00:40:39.291 05:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:39.291 05:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:40:39.548 [2024-12-09 05:31:26.466135] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:39.548 05:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:39.548 05:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:39.548 05:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:39.548 05:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:39.548 05:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:39.548 05:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:39.548 05:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:40:39.548 05:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:40:39.549 05:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:40:39.549 05:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.549 05:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:39.549 [2024-12-09 05:31:26.498243] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:40:39.549 05:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.549 05:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:39.549 05:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:40:39.549 05:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:39.549 05:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:39.549 05:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:39.549 05:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:40:39.549 05:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:39.549 05:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:39.549 05:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:39.549 05:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:39.549 05:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:39.549 05:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:39.549 05:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.549 05:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:39.549 05:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.806 05:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:39.806 "name": "raid_bdev1", 00:40:39.806 "uuid": "cdc4d671-6846-40d2-bf33-ce536f28a43f", 00:40:39.806 "strip_size_kb": 0, 00:40:39.806 "state": "online", 00:40:39.806 "raid_level": "raid1", 00:40:39.806 "superblock": true, 00:40:39.806 "num_base_bdevs": 2, 00:40:39.806 "num_base_bdevs_discovered": 1, 00:40:39.806 "num_base_bdevs_operational": 1, 00:40:39.806 "base_bdevs_list": [ 00:40:39.806 { 00:40:39.806 "name": null, 00:40:39.806 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:40:39.806 "is_configured": false, 00:40:39.806 "data_offset": 0, 00:40:39.806 "data_size": 63488 00:40:39.806 }, 00:40:39.806 { 00:40:39.806 "name": "BaseBdev2", 00:40:39.806 "uuid": "5c940c6b-c89a-548c-a561-123398f28e9a", 00:40:39.806 "is_configured": true, 00:40:39.806 "data_offset": 2048, 00:40:39.806 "data_size": 63488 00:40:39.806 } 00:40:39.806 ] 00:40:39.806 }' 00:40:39.806 05:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:39.806 05:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:40.064 05:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:40:40.064 05:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.064 05:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:40.064 [2024-12-09 05:31:26.994462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:40.064 [2024-12-09 05:31:27.012323] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:40:40.064 05:31:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.064 05:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:40:40.064 [2024-12-09 05:31:27.015159] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:41.438 
05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:41.438 "name": "raid_bdev1", 00:40:41.438 "uuid": "cdc4d671-6846-40d2-bf33-ce536f28a43f", 00:40:41.438 "strip_size_kb": 0, 00:40:41.438 "state": "online", 00:40:41.438 "raid_level": "raid1", 00:40:41.438 "superblock": true, 00:40:41.438 "num_base_bdevs": 2, 00:40:41.438 "num_base_bdevs_discovered": 2, 00:40:41.438 "num_base_bdevs_operational": 2, 00:40:41.438 "process": { 00:40:41.438 "type": "rebuild", 00:40:41.438 "target": "spare", 00:40:41.438 "progress": { 00:40:41.438 "blocks": 20480, 00:40:41.438 "percent": 32 00:40:41.438 } 00:40:41.438 }, 00:40:41.438 "base_bdevs_list": [ 00:40:41.438 { 00:40:41.438 "name": "spare", 00:40:41.438 "uuid": "5e3dc648-4e44-572a-af99-9d8bce7115ee", 00:40:41.438 "is_configured": true, 00:40:41.438 "data_offset": 2048, 00:40:41.438 "data_size": 63488 00:40:41.438 }, 00:40:41.438 { 00:40:41.438 "name": "BaseBdev2", 00:40:41.438 "uuid": "5c940c6b-c89a-548c-a561-123398f28e9a", 00:40:41.438 "is_configured": true, 00:40:41.438 "data_offset": 2048, 00:40:41.438 "data_size": 63488 00:40:41.438 } 00:40:41.438 ] 00:40:41.438 }' 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:41.438 [2024-12-09 05:31:28.176607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:41.438 [2024-12-09 05:31:28.224332] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:40:41.438 [2024-12-09 05:31:28.224423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:41.438 [2024-12-09 05:31:28.224445] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:41.438 [2024-12-09 05:31:28.224464] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:41.438 "name": "raid_bdev1", 00:40:41.438 "uuid": "cdc4d671-6846-40d2-bf33-ce536f28a43f", 00:40:41.438 "strip_size_kb": 0, 00:40:41.438 "state": "online", 00:40:41.438 "raid_level": "raid1", 00:40:41.438 "superblock": true, 00:40:41.438 "num_base_bdevs": 2, 00:40:41.438 "num_base_bdevs_discovered": 1, 00:40:41.438 "num_base_bdevs_operational": 1, 00:40:41.438 "base_bdevs_list": [ 00:40:41.438 { 00:40:41.438 "name": null, 00:40:41.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:41.438 "is_configured": false, 00:40:41.438 "data_offset": 0, 00:40:41.438 "data_size": 63488 00:40:41.438 }, 00:40:41.438 { 00:40:41.438 "name": "BaseBdev2", 00:40:41.438 "uuid": "5c940c6b-c89a-548c-a561-123398f28e9a", 00:40:41.438 "is_configured": true, 00:40:41.438 "data_offset": 2048, 00:40:41.438 "data_size": 63488 00:40:41.438 } 00:40:41.438 ] 00:40:41.438 }' 00:40:41.438 05:31:28 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:41.438 05:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:42.004 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:42.004 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:42.004 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:40:42.004 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:40:42.004 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:42.004 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:42.004 05:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:42.004 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:42.004 05:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:42.004 05:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:42.004 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:42.004 "name": "raid_bdev1", 00:40:42.004 "uuid": "cdc4d671-6846-40d2-bf33-ce536f28a43f", 00:40:42.004 "strip_size_kb": 0, 00:40:42.004 "state": "online", 00:40:42.004 "raid_level": "raid1", 00:40:42.004 "superblock": true, 00:40:42.004 "num_base_bdevs": 2, 00:40:42.004 "num_base_bdevs_discovered": 1, 00:40:42.004 "num_base_bdevs_operational": 1, 00:40:42.004 "base_bdevs_list": [ 00:40:42.004 { 00:40:42.004 "name": null, 00:40:42.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:42.004 "is_configured": false, 00:40:42.004 "data_offset": 0, 00:40:42.004 "data_size": 63488 00:40:42.004 }, 00:40:42.004 
{ 00:40:42.004 "name": "BaseBdev2", 00:40:42.004 "uuid": "5c940c6b-c89a-548c-a561-123398f28e9a", 00:40:42.004 "is_configured": true, 00:40:42.004 "data_offset": 2048, 00:40:42.004 "data_size": 63488 00:40:42.004 } 00:40:42.004 ] 00:40:42.004 }' 00:40:42.004 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:42.004 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:40:42.004 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:42.004 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:40:42.004 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:40:42.004 05:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:42.004 05:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:42.261 [2024-12-09 05:31:28.976234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:42.261 [2024-12-09 05:31:28.992593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:40:42.261 05:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:42.261 05:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:40:42.261 [2024-12-09 05:31:28.995416] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:43.195 05:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:43.195 05:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:43.195 05:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:43.195 05:31:29 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:43.195 05:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:43.195 05:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:43.195 05:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:43.195 05:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:43.195 05:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:43.195 05:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:43.195 05:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:43.195 "name": "raid_bdev1", 00:40:43.196 "uuid": "cdc4d671-6846-40d2-bf33-ce536f28a43f", 00:40:43.196 "strip_size_kb": 0, 00:40:43.196 "state": "online", 00:40:43.196 "raid_level": "raid1", 00:40:43.196 "superblock": true, 00:40:43.196 "num_base_bdevs": 2, 00:40:43.196 "num_base_bdevs_discovered": 2, 00:40:43.196 "num_base_bdevs_operational": 2, 00:40:43.196 "process": { 00:40:43.196 "type": "rebuild", 00:40:43.196 "target": "spare", 00:40:43.196 "progress": { 00:40:43.196 "blocks": 20480, 00:40:43.196 "percent": 32 00:40:43.196 } 00:40:43.196 }, 00:40:43.196 "base_bdevs_list": [ 00:40:43.196 { 00:40:43.196 "name": "spare", 00:40:43.196 "uuid": "5e3dc648-4e44-572a-af99-9d8bce7115ee", 00:40:43.196 "is_configured": true, 00:40:43.196 "data_offset": 2048, 00:40:43.196 "data_size": 63488 00:40:43.196 }, 00:40:43.196 { 00:40:43.196 "name": "BaseBdev2", 00:40:43.196 "uuid": "5c940c6b-c89a-548c-a561-123398f28e9a", 00:40:43.196 "is_configured": true, 00:40:43.196 "data_offset": 2048, 00:40:43.196 "data_size": 63488 00:40:43.196 } 00:40:43.196 ] 00:40:43.196 }' 00:40:43.196 05:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:40:43.196 05:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:43.196 05:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:43.196 05:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:43.196 05:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:40:43.196 05:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:40:43.196 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:40:43.196 05:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:40:43.196 05:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:40:43.196 05:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:40:43.196 05:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=426 00:40:43.196 05:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:40:43.196 05:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:43.196 05:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:43.196 05:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:43.196 05:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:43.196 05:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:43.196 05:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:43.196 05:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:40:43.196 05:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:43.196 05:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:43.455 05:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:43.455 05:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:43.455 "name": "raid_bdev1", 00:40:43.455 "uuid": "cdc4d671-6846-40d2-bf33-ce536f28a43f", 00:40:43.455 "strip_size_kb": 0, 00:40:43.455 "state": "online", 00:40:43.455 "raid_level": "raid1", 00:40:43.455 "superblock": true, 00:40:43.455 "num_base_bdevs": 2, 00:40:43.455 "num_base_bdevs_discovered": 2, 00:40:43.455 "num_base_bdevs_operational": 2, 00:40:43.455 "process": { 00:40:43.455 "type": "rebuild", 00:40:43.455 "target": "spare", 00:40:43.455 "progress": { 00:40:43.455 "blocks": 22528, 00:40:43.455 "percent": 35 00:40:43.455 } 00:40:43.455 }, 00:40:43.455 "base_bdevs_list": [ 00:40:43.455 { 00:40:43.455 "name": "spare", 00:40:43.455 "uuid": "5e3dc648-4e44-572a-af99-9d8bce7115ee", 00:40:43.455 "is_configured": true, 00:40:43.455 "data_offset": 2048, 00:40:43.455 "data_size": 63488 00:40:43.455 }, 00:40:43.455 { 00:40:43.455 "name": "BaseBdev2", 00:40:43.455 "uuid": "5c940c6b-c89a-548c-a561-123398f28e9a", 00:40:43.455 "is_configured": true, 00:40:43.455 "data_offset": 2048, 00:40:43.455 "data_size": 63488 00:40:43.455 } 00:40:43.455 ] 00:40:43.455 }' 00:40:43.455 05:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:43.455 05:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:43.455 05:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:43.455 05:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:43.455 05:31:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:40:44.389 05:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:40:44.389 05:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:44.389 05:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:44.389 05:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:44.389 05:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:44.389 05:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:44.389 05:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:44.389 05:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:44.389 05:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.389 05:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:44.389 05:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.648 05:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:44.648 "name": "raid_bdev1", 00:40:44.648 "uuid": "cdc4d671-6846-40d2-bf33-ce536f28a43f", 00:40:44.648 "strip_size_kb": 0, 00:40:44.648 "state": "online", 00:40:44.648 "raid_level": "raid1", 00:40:44.648 "superblock": true, 00:40:44.648 "num_base_bdevs": 2, 00:40:44.648 "num_base_bdevs_discovered": 2, 00:40:44.648 "num_base_bdevs_operational": 2, 00:40:44.648 "process": { 00:40:44.648 "type": "rebuild", 00:40:44.648 "target": "spare", 00:40:44.648 "progress": { 00:40:44.648 "blocks": 47104, 00:40:44.648 "percent": 74 00:40:44.648 } 00:40:44.648 }, 00:40:44.648 "base_bdevs_list": [ 00:40:44.648 { 
00:40:44.648 "name": "spare", 00:40:44.648 "uuid": "5e3dc648-4e44-572a-af99-9d8bce7115ee", 00:40:44.648 "is_configured": true, 00:40:44.648 "data_offset": 2048, 00:40:44.648 "data_size": 63488 00:40:44.648 }, 00:40:44.648 { 00:40:44.648 "name": "BaseBdev2", 00:40:44.648 "uuid": "5c940c6b-c89a-548c-a561-123398f28e9a", 00:40:44.648 "is_configured": true, 00:40:44.648 "data_offset": 2048, 00:40:44.648 "data_size": 63488 00:40:44.648 } 00:40:44.648 ] 00:40:44.648 }' 00:40:44.648 05:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:44.648 05:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:44.648 05:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:44.648 05:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:44.648 05:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:40:45.216 [2024-12-09 05:31:32.117512] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:40:45.216 [2024-12-09 05:31:32.117648] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:40:45.216 [2024-12-09 05:31:32.117847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:45.792 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:40:45.792 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:45.792 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:45.792 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:45.792 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:45.792 05:31:32 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:45.792 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:45.792 05:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.792 05:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:45.792 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:45.792 05:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.792 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:45.792 "name": "raid_bdev1", 00:40:45.792 "uuid": "cdc4d671-6846-40d2-bf33-ce536f28a43f", 00:40:45.792 "strip_size_kb": 0, 00:40:45.792 "state": "online", 00:40:45.792 "raid_level": "raid1", 00:40:45.792 "superblock": true, 00:40:45.792 "num_base_bdevs": 2, 00:40:45.792 "num_base_bdevs_discovered": 2, 00:40:45.792 "num_base_bdevs_operational": 2, 00:40:45.792 "base_bdevs_list": [ 00:40:45.792 { 00:40:45.792 "name": "spare", 00:40:45.792 "uuid": "5e3dc648-4e44-572a-af99-9d8bce7115ee", 00:40:45.792 "is_configured": true, 00:40:45.792 "data_offset": 2048, 00:40:45.792 "data_size": 63488 00:40:45.792 }, 00:40:45.792 { 00:40:45.792 "name": "BaseBdev2", 00:40:45.792 "uuid": "5c940c6b-c89a-548c-a561-123398f28e9a", 00:40:45.792 "is_configured": true, 00:40:45.792 "data_offset": 2048, 00:40:45.792 "data_size": 63488 00:40:45.792 } 00:40:45.792 ] 00:40:45.792 }' 00:40:45.792 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:45.792 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:40:45.792 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:45.792 05:31:32 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:40:45.792 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:40:45.792 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:45.792 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:45.792 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:40:45.792 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:40:45.792 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:45.792 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:45.792 05:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.792 05:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:45.792 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:45.792 05:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.792 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:45.792 "name": "raid_bdev1", 00:40:45.792 "uuid": "cdc4d671-6846-40d2-bf33-ce536f28a43f", 00:40:45.792 "strip_size_kb": 0, 00:40:45.792 "state": "online", 00:40:45.792 "raid_level": "raid1", 00:40:45.792 "superblock": true, 00:40:45.792 "num_base_bdevs": 2, 00:40:45.792 "num_base_bdevs_discovered": 2, 00:40:45.792 "num_base_bdevs_operational": 2, 00:40:45.792 "base_bdevs_list": [ 00:40:45.792 { 00:40:45.792 "name": "spare", 00:40:45.792 "uuid": "5e3dc648-4e44-572a-af99-9d8bce7115ee", 00:40:45.792 "is_configured": true, 00:40:45.792 "data_offset": 2048, 00:40:45.792 "data_size": 63488 00:40:45.792 }, 00:40:45.792 { 00:40:45.792 "name": 
"BaseBdev2", 00:40:45.792 "uuid": "5c940c6b-c89a-548c-a561-123398f28e9a", 00:40:45.792 "is_configured": true, 00:40:45.792 "data_offset": 2048, 00:40:45.792 "data_size": 63488 00:40:45.792 } 00:40:45.792 ] 00:40:45.792 }' 00:40:45.792 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:45.792 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:40:45.792 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:46.068 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:40:46.068 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:40:46.068 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:46.068 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:46.068 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:46.068 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:46.068 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:40:46.068 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:46.068 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:46.068 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:46.068 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:46.068 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:46.068 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:40:46.068 05:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:46.068 05:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:46.068 05:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:46.068 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:46.068 "name": "raid_bdev1", 00:40:46.068 "uuid": "cdc4d671-6846-40d2-bf33-ce536f28a43f", 00:40:46.068 "strip_size_kb": 0, 00:40:46.068 "state": "online", 00:40:46.068 "raid_level": "raid1", 00:40:46.068 "superblock": true, 00:40:46.068 "num_base_bdevs": 2, 00:40:46.068 "num_base_bdevs_discovered": 2, 00:40:46.068 "num_base_bdevs_operational": 2, 00:40:46.068 "base_bdevs_list": [ 00:40:46.068 { 00:40:46.068 "name": "spare", 00:40:46.068 "uuid": "5e3dc648-4e44-572a-af99-9d8bce7115ee", 00:40:46.068 "is_configured": true, 00:40:46.068 "data_offset": 2048, 00:40:46.068 "data_size": 63488 00:40:46.068 }, 00:40:46.068 { 00:40:46.068 "name": "BaseBdev2", 00:40:46.068 "uuid": "5c940c6b-c89a-548c-a561-123398f28e9a", 00:40:46.068 "is_configured": true, 00:40:46.068 "data_offset": 2048, 00:40:46.068 "data_size": 63488 00:40:46.068 } 00:40:46.068 ] 00:40:46.068 }' 00:40:46.068 05:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:46.068 05:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:46.636 05:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:40:46.636 05:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:46.636 05:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:46.636 [2024-12-09 05:31:33.339638] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:46.636 [2024-12-09 05:31:33.339716] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:46.636 [2024-12-09 05:31:33.339854] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:46.636 [2024-12-09 05:31:33.339967] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:46.636 [2024-12-09 05:31:33.339986] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:40:46.636 05:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:46.636 05:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:46.636 05:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:46.636 05:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:40:46.636 05:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:46.636 05:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:46.636 05:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:40:46.636 05:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:40:46.636 05:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:40:46.636 05:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:40:46.636 05:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:40:46.636 05:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:40:46.636 05:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:40:46.636 05:31:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:40:46.636 05:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:40:46.636 05:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:40:46.636 05:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:40:46.636 05:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:40:46.636 05:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:40:46.895 /dev/nbd0 00:40:46.895 05:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:40:46.895 05:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:40:46.895 05:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:40:46.895 05:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:40:46.895 05:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:40:46.895 05:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:40:46.895 05:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:40:46.895 05:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:40:46.895 05:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:40:46.895 05:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:40:46.895 05:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:46.895 1+0 records in 00:40:46.895 1+0 records out 00:40:46.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000345842 s, 11.8 MB/s 00:40:46.895 05:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:46.895 05:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:40:46.895 05:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:46.895 05:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:40:46.895 05:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:40:46.895 05:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:46.895 05:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:40:46.895 05:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:40:47.153 /dev/nbd1 00:40:47.153 05:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:40:47.153 05:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:40:47.153 05:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:40:47.153 05:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:40:47.153 05:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:40:47.153 05:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:40:47.153 05:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:40:47.153 05:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:40:47.153 05:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:40:47.153 05:31:34 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:40:47.153 05:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:47.153 1+0 records in 00:40:47.153 1+0 records out 00:40:47.153 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452734 s, 9.0 MB/s 00:40:47.153 05:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:47.153 05:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:40:47.153 05:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:47.153 05:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:40:47.153 05:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:40:47.153 05:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:47.153 05:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:40:47.153 05:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:40:47.411 05:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:40:47.411 05:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:40:47.411 05:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:40:47.411 05:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:47.411 05:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:40:47.411 05:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:47.411 
05:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:40:47.669 05:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:47.669 05:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:47.669 05:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:47.669 05:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:47.669 05:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:47.669 05:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:47.669 05:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:40:47.669 05:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:40:47.669 05:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:47.669 05:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:40:48.236 05:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:40:48.236 05:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:40:48.236 05:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:40:48.236 05:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:48.236 05:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:48.236 05:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:40:48.236 05:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:40:48.236 05:31:34 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@45 -- # return 0 00:40:48.236 05:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:40:48.237 05:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:40:48.237 05:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.237 05:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:48.237 05:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.237 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:40:48.237 05:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.237 05:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:48.237 [2024-12-09 05:31:35.013283] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:40:48.237 [2024-12-09 05:31:35.013347] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:48.237 [2024-12-09 05:31:35.013386] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:40:48.237 [2024-12-09 05:31:35.013404] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:48.237 [2024-12-09 05:31:35.016456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:48.237 [2024-12-09 05:31:35.016499] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:40:48.237 [2024-12-09 05:31:35.016619] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:40:48.237 [2024-12-09 05:31:35.016689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:48.237 [2024-12-09 05:31:35.016891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:40:48.237 spare 00:40:48.237 05:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.237 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:40:48.237 05:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.237 05:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:48.237 [2024-12-09 05:31:35.117022] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:40:48.237 [2024-12-09 05:31:35.117075] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:40:48.237 [2024-12-09 05:31:35.117413] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:40:48.237 [2024-12-09 05:31:35.117655] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:40:48.237 [2024-12-09 05:31:35.117690] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:40:48.237 [2024-12-09 05:31:35.117927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:48.237 05:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.237 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:40:48.237 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:48.237 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:48.237 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:48.237 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:48.237 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:40:48.237 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:48.237 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:48.237 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:48.237 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:48.237 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:48.237 05:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.237 05:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:48.237 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:48.237 05:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.237 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:48.237 "name": "raid_bdev1", 00:40:48.237 "uuid": "cdc4d671-6846-40d2-bf33-ce536f28a43f", 00:40:48.237 "strip_size_kb": 0, 00:40:48.237 "state": "online", 00:40:48.237 "raid_level": "raid1", 00:40:48.237 "superblock": true, 00:40:48.237 "num_base_bdevs": 2, 00:40:48.237 "num_base_bdevs_discovered": 2, 00:40:48.237 "num_base_bdevs_operational": 2, 00:40:48.237 "base_bdevs_list": [ 00:40:48.237 { 00:40:48.237 "name": "spare", 00:40:48.237 "uuid": "5e3dc648-4e44-572a-af99-9d8bce7115ee", 00:40:48.237 "is_configured": true, 00:40:48.237 "data_offset": 2048, 00:40:48.237 "data_size": 63488 00:40:48.237 }, 00:40:48.237 { 00:40:48.237 "name": "BaseBdev2", 00:40:48.237 "uuid": "5c940c6b-c89a-548c-a561-123398f28e9a", 00:40:48.237 "is_configured": true, 00:40:48.237 "data_offset": 2048, 00:40:48.237 "data_size": 63488 00:40:48.237 } 00:40:48.237 ] 00:40:48.237 }' 00:40:48.237 05:31:35 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:48.237 05:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:48.800 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:48.800 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:48.800 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:40:48.800 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:40:48.800 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:48.800 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:48.800 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:48.800 05:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.800 05:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:48.800 05:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.800 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:48.800 "name": "raid_bdev1", 00:40:48.800 "uuid": "cdc4d671-6846-40d2-bf33-ce536f28a43f", 00:40:48.800 "strip_size_kb": 0, 00:40:48.800 "state": "online", 00:40:48.800 "raid_level": "raid1", 00:40:48.800 "superblock": true, 00:40:48.800 "num_base_bdevs": 2, 00:40:48.800 "num_base_bdevs_discovered": 2, 00:40:48.800 "num_base_bdevs_operational": 2, 00:40:48.800 "base_bdevs_list": [ 00:40:48.800 { 00:40:48.800 "name": "spare", 00:40:48.800 "uuid": "5e3dc648-4e44-572a-af99-9d8bce7115ee", 00:40:48.800 "is_configured": true, 00:40:48.800 "data_offset": 2048, 00:40:48.800 "data_size": 63488 00:40:48.800 }, 
00:40:48.800 { 00:40:48.800 "name": "BaseBdev2", 00:40:48.800 "uuid": "5c940c6b-c89a-548c-a561-123398f28e9a", 00:40:48.800 "is_configured": true, 00:40:48.800 "data_offset": 2048, 00:40:48.800 "data_size": 63488 00:40:48.800 } 00:40:48.800 ] 00:40:48.800 }' 00:40:48.800 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:48.800 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:40:48.800 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:49.057 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:40:49.057 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:49.057 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:40:49.057 05:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:49.057 05:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:49.057 05:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:49.057 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:40:49.057 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:40:49.058 05:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:49.058 05:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:49.058 [2024-12-09 05:31:35.846112] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:49.058 05:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:49.058 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:40:49.058 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:49.058 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:49.058 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:49.058 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:49.058 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:40:49.058 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:49.058 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:49.058 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:49.058 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:49.058 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:49.058 05:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:49.058 05:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:49.058 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:49.058 05:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:49.058 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:49.058 "name": "raid_bdev1", 00:40:49.058 "uuid": "cdc4d671-6846-40d2-bf33-ce536f28a43f", 00:40:49.058 "strip_size_kb": 0, 00:40:49.058 "state": "online", 00:40:49.058 "raid_level": "raid1", 00:40:49.058 "superblock": true, 00:40:49.058 "num_base_bdevs": 2, 00:40:49.058 "num_base_bdevs_discovered": 1, 00:40:49.058 "num_base_bdevs_operational": 
1, 00:40:49.058 "base_bdevs_list": [ 00:40:49.058 { 00:40:49.058 "name": null, 00:40:49.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:49.058 "is_configured": false, 00:40:49.058 "data_offset": 0, 00:40:49.058 "data_size": 63488 00:40:49.058 }, 00:40:49.058 { 00:40:49.058 "name": "BaseBdev2", 00:40:49.058 "uuid": "5c940c6b-c89a-548c-a561-123398f28e9a", 00:40:49.058 "is_configured": true, 00:40:49.058 "data_offset": 2048, 00:40:49.058 "data_size": 63488 00:40:49.058 } 00:40:49.058 ] 00:40:49.058 }' 00:40:49.058 05:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:49.058 05:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:49.621 05:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:40:49.621 05:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:49.621 05:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:49.621 [2024-12-09 05:31:36.398259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:49.621 [2024-12-09 05:31:36.398537] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:40:49.621 [2024-12-09 05:31:36.398573] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:40:49.621 [2024-12-09 05:31:36.398625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:49.621 [2024-12-09 05:31:36.414524] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:40:49.621 05:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:49.621 05:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:40:49.621 [2024-12-09 05:31:36.417146] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:50.553 05:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:50.553 05:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:50.553 05:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:50.553 05:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:50.553 05:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:50.553 05:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:50.553 05:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:50.553 05:31:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:50.553 05:31:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:50.553 05:31:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:50.553 05:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:50.553 "name": "raid_bdev1", 00:40:50.553 "uuid": "cdc4d671-6846-40d2-bf33-ce536f28a43f", 00:40:50.553 "strip_size_kb": 0, 00:40:50.553 "state": "online", 00:40:50.553 "raid_level": "raid1", 
00:40:50.553 "superblock": true, 00:40:50.553 "num_base_bdevs": 2, 00:40:50.553 "num_base_bdevs_discovered": 2, 00:40:50.553 "num_base_bdevs_operational": 2, 00:40:50.553 "process": { 00:40:50.553 "type": "rebuild", 00:40:50.553 "target": "spare", 00:40:50.553 "progress": { 00:40:50.553 "blocks": 20480, 00:40:50.553 "percent": 32 00:40:50.553 } 00:40:50.553 }, 00:40:50.553 "base_bdevs_list": [ 00:40:50.553 { 00:40:50.553 "name": "spare", 00:40:50.553 "uuid": "5e3dc648-4e44-572a-af99-9d8bce7115ee", 00:40:50.553 "is_configured": true, 00:40:50.553 "data_offset": 2048, 00:40:50.553 "data_size": 63488 00:40:50.553 }, 00:40:50.553 { 00:40:50.553 "name": "BaseBdev2", 00:40:50.553 "uuid": "5c940c6b-c89a-548c-a561-123398f28e9a", 00:40:50.553 "is_configured": true, 00:40:50.553 "data_offset": 2048, 00:40:50.553 "data_size": 63488 00:40:50.553 } 00:40:50.553 ] 00:40:50.553 }' 00:40:50.553 05:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:50.812 05:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:50.812 05:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:50.812 05:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:50.812 05:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:40:50.812 05:31:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:50.812 05:31:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:50.812 [2024-12-09 05:31:37.592274] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:50.812 [2024-12-09 05:31:37.626560] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:40:50.812 [2024-12-09 05:31:37.627126] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:40:50.812 [2024-12-09 05:31:37.627159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:50.812 [2024-12-09 05:31:37.627177] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:40:50.812 05:31:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:50.812 05:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:50.812 05:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:50.812 05:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:50.812 05:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:50.812 05:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:50.812 05:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:40:50.812 05:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:50.812 05:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:50.812 05:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:50.812 05:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:50.812 05:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:50.812 05:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:50.812 05:31:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:50.812 05:31:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:50.812 05:31:37 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:50.812 05:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:50.812 "name": "raid_bdev1", 00:40:50.812 "uuid": "cdc4d671-6846-40d2-bf33-ce536f28a43f", 00:40:50.812 "strip_size_kb": 0, 00:40:50.812 "state": "online", 00:40:50.812 "raid_level": "raid1", 00:40:50.812 "superblock": true, 00:40:50.812 "num_base_bdevs": 2, 00:40:50.812 "num_base_bdevs_discovered": 1, 00:40:50.812 "num_base_bdevs_operational": 1, 00:40:50.812 "base_bdevs_list": [ 00:40:50.812 { 00:40:50.812 "name": null, 00:40:50.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:50.812 "is_configured": false, 00:40:50.812 "data_offset": 0, 00:40:50.812 "data_size": 63488 00:40:50.812 }, 00:40:50.812 { 00:40:50.812 "name": "BaseBdev2", 00:40:50.812 "uuid": "5c940c6b-c89a-548c-a561-123398f28e9a", 00:40:50.812 "is_configured": true, 00:40:50.812 "data_offset": 2048, 00:40:50.812 "data_size": 63488 00:40:50.812 } 00:40:50.812 ] 00:40:50.812 }' 00:40:50.812 05:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:50.812 05:31:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:51.378 05:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:40:51.378 05:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:51.378 05:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:51.378 [2024-12-09 05:31:38.186796] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:40:51.378 [2024-12-09 05:31:38.186918] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:51.378 [2024-12-09 05:31:38.186958] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:40:51.378 [2024-12-09 05:31:38.186979] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:51.378 [2024-12-09 05:31:38.187725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:51.378 [2024-12-09 05:31:38.187863] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:40:51.378 [2024-12-09 05:31:38.188023] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:40:51.378 [2024-12-09 05:31:38.188050] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:40:51.378 [2024-12-09 05:31:38.188082] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:40:51.378 [2024-12-09 05:31:38.188190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:51.378 spare 00:40:51.378 [2024-12-09 05:31:38.205663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:40:51.378 05:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:51.378 05:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:40:51.378 [2024-12-09 05:31:38.208639] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:52.311 05:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:52.311 05:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:52.311 05:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:52.311 05:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:52.311 05:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:52.311 05:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:40:52.311 05:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:52.311 05:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:52.311 05:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:52.311 05:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:52.311 05:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:52.311 "name": "raid_bdev1", 00:40:52.311 "uuid": "cdc4d671-6846-40d2-bf33-ce536f28a43f", 00:40:52.311 "strip_size_kb": 0, 00:40:52.311 "state": "online", 00:40:52.311 "raid_level": "raid1", 00:40:52.311 "superblock": true, 00:40:52.311 "num_base_bdevs": 2, 00:40:52.311 "num_base_bdevs_discovered": 2, 00:40:52.311 "num_base_bdevs_operational": 2, 00:40:52.311 "process": { 00:40:52.311 "type": "rebuild", 00:40:52.311 "target": "spare", 00:40:52.311 "progress": { 00:40:52.311 "blocks": 20480, 00:40:52.311 "percent": 32 00:40:52.311 } 00:40:52.311 }, 00:40:52.311 "base_bdevs_list": [ 00:40:52.311 { 00:40:52.311 "name": "spare", 00:40:52.311 "uuid": "5e3dc648-4e44-572a-af99-9d8bce7115ee", 00:40:52.311 "is_configured": true, 00:40:52.311 "data_offset": 2048, 00:40:52.311 "data_size": 63488 00:40:52.311 }, 00:40:52.311 { 00:40:52.311 "name": "BaseBdev2", 00:40:52.311 "uuid": "5c940c6b-c89a-548c-a561-123398f28e9a", 00:40:52.311 "is_configured": true, 00:40:52.311 "data_offset": 2048, 00:40:52.311 "data_size": 63488 00:40:52.311 } 00:40:52.311 ] 00:40:52.311 }' 00:40:52.311 05:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:52.569 05:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:52.569 05:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:52.569 
05:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:52.569 05:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:40:52.569 05:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:52.569 05:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:52.569 [2024-12-09 05:31:39.370315] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:52.569 [2024-12-09 05:31:39.418217] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:40:52.569 [2024-12-09 05:31:39.418549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:52.569 [2024-12-09 05:31:39.418587] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:52.569 [2024-12-09 05:31:39.418603] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:40:52.569 05:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:52.569 05:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:52.569 05:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:52.569 05:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:52.569 05:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:52.569 05:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:52.569 05:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:40:52.569 05:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:52.569 05:31:39 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:52.570 05:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:52.570 05:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:52.570 05:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:52.570 05:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:52.570 05:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:52.570 05:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:52.570 05:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:52.570 05:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:52.570 "name": "raid_bdev1", 00:40:52.570 "uuid": "cdc4d671-6846-40d2-bf33-ce536f28a43f", 00:40:52.570 "strip_size_kb": 0, 00:40:52.570 "state": "online", 00:40:52.570 "raid_level": "raid1", 00:40:52.570 "superblock": true, 00:40:52.570 "num_base_bdevs": 2, 00:40:52.570 "num_base_bdevs_discovered": 1, 00:40:52.570 "num_base_bdevs_operational": 1, 00:40:52.570 "base_bdevs_list": [ 00:40:52.570 { 00:40:52.570 "name": null, 00:40:52.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:52.570 "is_configured": false, 00:40:52.570 "data_offset": 0, 00:40:52.570 "data_size": 63488 00:40:52.570 }, 00:40:52.570 { 00:40:52.570 "name": "BaseBdev2", 00:40:52.570 "uuid": "5c940c6b-c89a-548c-a561-123398f28e9a", 00:40:52.570 "is_configured": true, 00:40:52.570 "data_offset": 2048, 00:40:52.570 "data_size": 63488 00:40:52.570 } 00:40:52.570 ] 00:40:52.570 }' 00:40:52.570 05:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:52.570 05:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:53.138 05:31:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:53.138 05:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:53.138 05:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:40:53.138 05:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:40:53.139 05:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:53.139 05:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:53.139 05:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:53.139 05:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:53.139 05:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:53.139 05:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:53.139 05:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:53.139 "name": "raid_bdev1", 00:40:53.139 "uuid": "cdc4d671-6846-40d2-bf33-ce536f28a43f", 00:40:53.139 "strip_size_kb": 0, 00:40:53.139 "state": "online", 00:40:53.139 "raid_level": "raid1", 00:40:53.139 "superblock": true, 00:40:53.139 "num_base_bdevs": 2, 00:40:53.139 "num_base_bdevs_discovered": 1, 00:40:53.139 "num_base_bdevs_operational": 1, 00:40:53.139 "base_bdevs_list": [ 00:40:53.139 { 00:40:53.139 "name": null, 00:40:53.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:53.139 "is_configured": false, 00:40:53.139 "data_offset": 0, 00:40:53.139 "data_size": 63488 00:40:53.139 }, 00:40:53.139 { 00:40:53.139 "name": "BaseBdev2", 00:40:53.139 "uuid": "5c940c6b-c89a-548c-a561-123398f28e9a", 00:40:53.139 "is_configured": true, 00:40:53.139 "data_offset": 2048, 00:40:53.139 "data_size": 
63488 00:40:53.139 } 00:40:53.139 ] 00:40:53.139 }' 00:40:53.139 05:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:53.139 05:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:40:53.139 05:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:53.398 05:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:40:53.398 05:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:40:53.398 05:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:53.398 05:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:53.398 05:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:53.398 05:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:40:53.398 05:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:53.398 05:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:53.398 [2024-12-09 05:31:40.172091] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:40:53.398 [2024-12-09 05:31:40.172187] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:53.398 [2024-12-09 05:31:40.172235] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:40:53.398 [2024-12-09 05:31:40.172265] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:53.398 [2024-12-09 05:31:40.173047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:53.398 [2024-12-09 05:31:40.173090] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:40:53.398 [2024-12-09 05:31:40.173215] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:40:53.398 [2024-12-09 05:31:40.173239] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:40:53.398 [2024-12-09 05:31:40.173256] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:40:53.398 [2024-12-09 05:31:40.173271] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:40:53.398 BaseBdev1 00:40:53.398 05:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:53.398 05:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:40:54.333 05:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:54.333 05:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:54.333 05:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:54.333 05:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:54.333 05:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:54.333 05:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:40:54.333 05:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:54.333 05:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:54.333 05:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:54.333 05:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:54.333 05:31:41 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:54.333 05:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:54.333 05:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:54.333 05:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:54.333 05:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:54.333 05:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:54.333 "name": "raid_bdev1", 00:40:54.333 "uuid": "cdc4d671-6846-40d2-bf33-ce536f28a43f", 00:40:54.333 "strip_size_kb": 0, 00:40:54.333 "state": "online", 00:40:54.333 "raid_level": "raid1", 00:40:54.333 "superblock": true, 00:40:54.333 "num_base_bdevs": 2, 00:40:54.333 "num_base_bdevs_discovered": 1, 00:40:54.333 "num_base_bdevs_operational": 1, 00:40:54.333 "base_bdevs_list": [ 00:40:54.333 { 00:40:54.333 "name": null, 00:40:54.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:54.333 "is_configured": false, 00:40:54.333 "data_offset": 0, 00:40:54.333 "data_size": 63488 00:40:54.333 }, 00:40:54.333 { 00:40:54.333 "name": "BaseBdev2", 00:40:54.333 "uuid": "5c940c6b-c89a-548c-a561-123398f28e9a", 00:40:54.333 "is_configured": true, 00:40:54.333 "data_offset": 2048, 00:40:54.333 "data_size": 63488 00:40:54.333 } 00:40:54.333 ] 00:40:54.333 }' 00:40:54.333 05:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:54.333 05:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:54.901 05:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:54.901 05:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:54.901 05:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:40:54.901 05:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:40:54.901 05:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:54.901 05:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:54.901 05:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:54.901 05:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:54.901 05:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:54.901 05:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:54.901 05:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:54.901 "name": "raid_bdev1", 00:40:54.901 "uuid": "cdc4d671-6846-40d2-bf33-ce536f28a43f", 00:40:54.901 "strip_size_kb": 0, 00:40:54.901 "state": "online", 00:40:54.901 "raid_level": "raid1", 00:40:54.901 "superblock": true, 00:40:54.901 "num_base_bdevs": 2, 00:40:54.901 "num_base_bdevs_discovered": 1, 00:40:54.901 "num_base_bdevs_operational": 1, 00:40:54.901 "base_bdevs_list": [ 00:40:54.901 { 00:40:54.901 "name": null, 00:40:54.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:54.901 "is_configured": false, 00:40:54.901 "data_offset": 0, 00:40:54.901 "data_size": 63488 00:40:54.901 }, 00:40:54.901 { 00:40:54.901 "name": "BaseBdev2", 00:40:54.901 "uuid": "5c940c6b-c89a-548c-a561-123398f28e9a", 00:40:54.901 "is_configured": true, 00:40:54.901 "data_offset": 2048, 00:40:54.901 "data_size": 63488 00:40:54.901 } 00:40:54.901 ] 00:40:54.901 }' 00:40:54.901 05:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:54.901 05:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:40:54.901 05:31:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:55.160 05:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:40:55.160 05:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:40:55.160 05:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:40:55.160 05:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:40:55.160 05:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:40:55.160 05:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:55.160 05:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:40:55.160 05:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:55.160 05:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:40:55.160 05:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:55.160 05:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:55.160 [2024-12-09 05:31:41.889001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:55.160 [2024-12-09 05:31:41.889393] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:40:55.160 [2024-12-09 05:31:41.889638] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:40:55.160 request: 00:40:55.160 { 00:40:55.160 "base_bdev": "BaseBdev1", 00:40:55.160 "raid_bdev": "raid_bdev1", 00:40:55.160 "method": 
"bdev_raid_add_base_bdev", 00:40:55.160 "req_id": 1 00:40:55.160 } 00:40:55.160 Got JSON-RPC error response 00:40:55.160 response: 00:40:55.160 { 00:40:55.160 "code": -22, 00:40:55.160 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:40:55.160 } 00:40:55.160 05:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:55.160 05:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:40:55.160 05:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:55.160 05:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:55.160 05:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:55.160 05:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:40:56.098 05:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:56.098 05:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:56.098 05:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:56.098 05:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:56.098 05:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:56.098 05:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:40:56.098 05:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:56.098 05:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:56.098 05:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:56.098 05:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:56.098 05:31:42 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:56.098 05:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:56.098 05:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:56.098 05:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:56.098 05:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:56.098 05:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:56.098 "name": "raid_bdev1", 00:40:56.098 "uuid": "cdc4d671-6846-40d2-bf33-ce536f28a43f", 00:40:56.098 "strip_size_kb": 0, 00:40:56.098 "state": "online", 00:40:56.098 "raid_level": "raid1", 00:40:56.098 "superblock": true, 00:40:56.098 "num_base_bdevs": 2, 00:40:56.098 "num_base_bdevs_discovered": 1, 00:40:56.098 "num_base_bdevs_operational": 1, 00:40:56.098 "base_bdevs_list": [ 00:40:56.098 { 00:40:56.098 "name": null, 00:40:56.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:56.098 "is_configured": false, 00:40:56.098 "data_offset": 0, 00:40:56.098 "data_size": 63488 00:40:56.098 }, 00:40:56.098 { 00:40:56.098 "name": "BaseBdev2", 00:40:56.098 "uuid": "5c940c6b-c89a-548c-a561-123398f28e9a", 00:40:56.098 "is_configured": true, 00:40:56.098 "data_offset": 2048, 00:40:56.098 "data_size": 63488 00:40:56.098 } 00:40:56.098 ] 00:40:56.098 }' 00:40:56.098 05:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:56.098 05:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:56.666 05:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:56.666 05:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:56.666 05:31:43 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:40:56.666 05:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:40:56.666 05:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:56.666 05:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:56.666 05:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:56.666 05:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:56.666 05:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:56.666 05:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:56.666 05:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:56.666 "name": "raid_bdev1", 00:40:56.666 "uuid": "cdc4d671-6846-40d2-bf33-ce536f28a43f", 00:40:56.666 "strip_size_kb": 0, 00:40:56.666 "state": "online", 00:40:56.666 "raid_level": "raid1", 00:40:56.666 "superblock": true, 00:40:56.666 "num_base_bdevs": 2, 00:40:56.666 "num_base_bdevs_discovered": 1, 00:40:56.666 "num_base_bdevs_operational": 1, 00:40:56.666 "base_bdevs_list": [ 00:40:56.666 { 00:40:56.666 "name": null, 00:40:56.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:56.666 "is_configured": false, 00:40:56.666 "data_offset": 0, 00:40:56.666 "data_size": 63488 00:40:56.666 }, 00:40:56.666 { 00:40:56.666 "name": "BaseBdev2", 00:40:56.666 "uuid": "5c940c6b-c89a-548c-a561-123398f28e9a", 00:40:56.666 "is_configured": true, 00:40:56.666 "data_offset": 2048, 00:40:56.666 "data_size": 63488 00:40:56.666 } 00:40:56.666 ] 00:40:56.666 }' 00:40:56.667 05:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:56.667 05:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:40:56.667 05:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:56.667 05:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:40:56.667 05:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76029 00:40:56.667 05:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 76029 ']' 00:40:56.667 05:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 76029 00:40:56.667 05:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:40:56.667 05:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:56.667 05:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76029 00:40:56.926 killing process with pid 76029 00:40:56.926 Received shutdown signal, test time was about 60.000000 seconds 00:40:56.926 00:40:56.926 Latency(us) 00:40:56.926 [2024-12-09T05:31:43.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:56.926 [2024-12-09T05:31:43.898Z] =================================================================================================================== 00:40:56.926 [2024-12-09T05:31:43.898Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:40:56.926 05:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:56.926 05:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:56.926 05:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76029' 00:40:56.926 05:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 76029 00:40:56.926 [2024-12-09 05:31:43.644901] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:40:56.926 05:31:43 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 76029 00:40:56.926 [2024-12-09 05:31:43.645190] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:56.926 [2024-12-09 05:31:43.645321] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:56.926 [2024-12-09 05:31:43.645352] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:40:57.184 [2024-12-09 05:31:43.928586] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:40:58.144 05:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:40:58.144 00:40:58.144 real 0m27.744s 00:40:58.144 user 0m33.802s 00:40:58.144 sys 0m4.399s 00:40:58.144 05:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:58.144 05:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:58.144 ************************************ 00:40:58.144 END TEST raid_rebuild_test_sb 00:40:58.144 ************************************ 00:40:58.144 05:31:45 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:40:58.144 05:31:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:40:58.144 05:31:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:58.144 05:31:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:40:58.144 ************************************ 00:40:58.144 START TEST raid_rebuild_test_io 00:40:58.144 ************************************ 00:40:58.144 05:31:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:40:58.144 05:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:40:58.144 05:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:40:58.144 05:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:40:58.144 05:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:40:58.144 05:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:40:58.144 05:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:40:58.144 05:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:40:58.144 05:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:40:58.144 05:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:40:58.144 05:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:40:58.144 05:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:40:58.144 05:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:40:58.144 05:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:40:58.144 05:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:40:58.144 05:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:40:58.144 05:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:40:58.144 05:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:40:58.144 05:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:40:58.144 05:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:40:58.144 05:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:40:58.144 05:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:40:58.144 
05:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:40:58.144 05:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:40:58.144 05:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76803 00:40:58.144 05:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76803 00:40:58.144 05:31:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76803 ']' 00:40:58.144 05:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:40:58.144 05:31:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:58.144 05:31:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:58.144 05:31:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:58.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:58.144 05:31:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:58.144 05:31:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:40:58.403 I/O size of 3145728 is greater than zero copy threshold (65536). 00:40:58.403 Zero copy mechanism will not be used. 00:40:58.403 [2024-12-09 05:31:45.174363] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:40:58.403 [2024-12-09 05:31:45.174588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76803 ] 00:40:58.403 [2024-12-09 05:31:45.365862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:58.660 [2024-12-09 05:31:45.488006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:58.918 [2024-12-09 05:31:45.672218] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:58.918 [2024-12-09 05:31:45.672300] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:40:59.485 BaseBdev1_malloc 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:40:59.485 [2024-12-09 05:31:46.204985] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:40:59.485 [2024-12-09 05:31:46.205054] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:59.485 [2024-12-09 05:31:46.205112] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:40:59.485 [2024-12-09 05:31:46.205130] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:59.485 [2024-12-09 05:31:46.207830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:59.485 [2024-12-09 05:31:46.208037] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:40:59.485 BaseBdev1 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:40:59.485 BaseBdev2_malloc 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:40:59.485 [2024-12-09 05:31:46.253755] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:40:59.485 [2024-12-09 05:31:46.254033] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:59.485 [2024-12-09 05:31:46.254075] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:40:59.485 [2024-12-09 05:31:46.254093] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:59.485 [2024-12-09 05:31:46.256829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:59.485 [2024-12-09 05:31:46.256884] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:40:59.485 BaseBdev2 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:40:59.485 spare_malloc 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:40:59.485 spare_delay 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:40:59.485 [2024-12-09 05:31:46.318096] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:40:59.485 [2024-12-09 05:31:46.318177] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:59.485 [2024-12-09 05:31:46.318204] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:40:59.485 [2024-12-09 05:31:46.318220] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:59.485 [2024-12-09 05:31:46.321264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:59.485 [2024-12-09 05:31:46.321310] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:40:59.485 spare 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:59.485 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:40:59.485 [2024-12-09 05:31:46.326266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:59.485 [2024-12-09 05:31:46.328987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:59.485 [2024-12-09 05:31:46.329271] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:40:59.485 [2024-12-09 05:31:46.329302] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:40:59.485 [2024-12-09 05:31:46.329638] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:40:59.486 [2024-12-09 05:31:46.329906] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:40:59.486 [2024-12-09 05:31:46.329926] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:40:59.486 [2024-12-09 05:31:46.330210] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:59.486 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:59.486 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:40:59.486 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:59.486 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:59.486 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:59.486 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:59.486 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:40:59.486 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:59.486 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:59.486 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:59.486 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:59.486 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:59.486 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:59.486 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:40:59.486 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:59.486 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:59.486 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:59.486 
"name": "raid_bdev1", 00:40:59.486 "uuid": "40a38bfc-6f3d-4c86-98f4-b29eb0e30830", 00:40:59.486 "strip_size_kb": 0, 00:40:59.486 "state": "online", 00:40:59.486 "raid_level": "raid1", 00:40:59.486 "superblock": false, 00:40:59.486 "num_base_bdevs": 2, 00:40:59.486 "num_base_bdevs_discovered": 2, 00:40:59.486 "num_base_bdevs_operational": 2, 00:40:59.486 "base_bdevs_list": [ 00:40:59.486 { 00:40:59.486 "name": "BaseBdev1", 00:40:59.486 "uuid": "411402c0-25ad-50ae-9b33-38efdc236979", 00:40:59.486 "is_configured": true, 00:40:59.486 "data_offset": 0, 00:40:59.486 "data_size": 65536 00:40:59.486 }, 00:40:59.486 { 00:40:59.486 "name": "BaseBdev2", 00:40:59.486 "uuid": "e3d25c0a-9df4-5a16-85f6-3ca22abaaa75", 00:40:59.486 "is_configured": true, 00:40:59.486 "data_offset": 0, 00:40:59.486 "data_size": 65536 00:40:59.486 } 00:40:59.486 ] 00:40:59.486 }' 00:40:59.486 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:59.486 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:41:00.052 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:41:00.052 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:41:00.052 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:00.052 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:41:00.052 [2024-12-09 05:31:46.862880] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:00.053 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:00.053 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:41:00.053 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:00.053 05:31:46 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:41:00.053 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:00.053 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:41:00.053 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:00.053 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:41:00.053 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:41:00.053 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:41:00.053 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:41:00.053 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:00.053 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:41:00.053 [2024-12-09 05:31:46.962428] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:41:00.053 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:00.053 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:41:00.053 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:00.053 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:00.053 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:41:00.053 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:41:00.053 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:41:00.053 05:31:46 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:00.053 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:00.053 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:00.053 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:00.053 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:00.053 05:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:00.053 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:00.053 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:41:00.053 05:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:00.311 05:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:00.311 "name": "raid_bdev1", 00:41:00.311 "uuid": "40a38bfc-6f3d-4c86-98f4-b29eb0e30830", 00:41:00.311 "strip_size_kb": 0, 00:41:00.311 "state": "online", 00:41:00.311 "raid_level": "raid1", 00:41:00.311 "superblock": false, 00:41:00.311 "num_base_bdevs": 2, 00:41:00.311 "num_base_bdevs_discovered": 1, 00:41:00.311 "num_base_bdevs_operational": 1, 00:41:00.311 "base_bdevs_list": [ 00:41:00.311 { 00:41:00.311 "name": null, 00:41:00.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:00.311 "is_configured": false, 00:41:00.311 "data_offset": 0, 00:41:00.311 "data_size": 65536 00:41:00.311 }, 00:41:00.311 { 00:41:00.311 "name": "BaseBdev2", 00:41:00.311 "uuid": "e3d25c0a-9df4-5a16-85f6-3ca22abaaa75", 00:41:00.311 "is_configured": true, 00:41:00.311 "data_offset": 0, 00:41:00.311 "data_size": 65536 00:41:00.311 } 00:41:00.311 ] 00:41:00.311 }' 00:41:00.311 05:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:41:00.311 05:31:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:41:00.311 [2024-12-09 05:31:47.095681] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:41:00.311 I/O size of 3145728 is greater than zero copy threshold (65536). 00:41:00.311 Zero copy mechanism will not be used. 00:41:00.311 Running I/O for 60 seconds... 00:41:00.877 05:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:41:00.877 05:31:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:00.877 05:31:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:41:00.877 [2024-12-09 05:31:47.552997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:00.877 05:31:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:00.877 05:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:41:00.877 [2024-12-09 05:31:47.610701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:41:00.877 [2024-12-09 05:31:47.613883] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:00.877 [2024-12-09 05:31:47.739971] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:41:00.877 [2024-12-09 05:31:47.740986] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:41:01.135 [2024-12-09 05:31:47.961131] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:41:01.135 [2024-12-09 05:31:47.961602] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:41:01.392 198.00 IOPS, 594.00 MiB/s 
[2024-12-09T05:31:48.364Z] [2024-12-09 05:31:48.307564] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:41:01.650 05:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:01.650 05:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:01.650 05:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:01.650 05:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:01.650 05:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:01.650 05:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:01.650 05:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:01.650 05:31:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.650 05:31:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:41:01.650 05:31:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.908 [2024-12-09 05:31:48.637871] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:41:01.908 05:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:01.908 "name": "raid_bdev1", 00:41:01.908 "uuid": "40a38bfc-6f3d-4c86-98f4-b29eb0e30830", 00:41:01.908 "strip_size_kb": 0, 00:41:01.908 "state": "online", 00:41:01.908 "raid_level": "raid1", 00:41:01.908 "superblock": false, 00:41:01.908 "num_base_bdevs": 2, 00:41:01.908 "num_base_bdevs_discovered": 2, 00:41:01.908 "num_base_bdevs_operational": 2, 00:41:01.908 "process": { 00:41:01.908 "type": "rebuild", 00:41:01.908 "target": "spare", 
00:41:01.908 "progress": { 00:41:01.908 "blocks": 12288, 00:41:01.908 "percent": 18 00:41:01.908 } 00:41:01.908 }, 00:41:01.908 "base_bdevs_list": [ 00:41:01.908 { 00:41:01.908 "name": "spare", 00:41:01.908 "uuid": "e23c691e-846a-5411-9096-917e6c3f2b8f", 00:41:01.908 "is_configured": true, 00:41:01.908 "data_offset": 0, 00:41:01.908 "data_size": 65536 00:41:01.908 }, 00:41:01.908 { 00:41:01.908 "name": "BaseBdev2", 00:41:01.908 "uuid": "e3d25c0a-9df4-5a16-85f6-3ca22abaaa75", 00:41:01.908 "is_configured": true, 00:41:01.908 "data_offset": 0, 00:41:01.908 "data_size": 65536 00:41:01.908 } 00:41:01.908 ] 00:41:01.908 }' 00:41:01.908 05:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:01.908 05:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:01.908 05:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:01.908 05:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:01.908 05:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:41:01.908 05:31:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.908 05:31:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:41:01.908 [2024-12-09 05:31:48.757334] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:01.908 [2024-12-09 05:31:48.834325] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:41:01.908 [2024-12-09 05:31:48.845746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:01.908 [2024-12-09 05:31:48.845807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:01.908 [2024-12-09 05:31:48.845829] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed 
to remove target bdev: No such device 00:41:02.165 [2024-12-09 05:31:48.882701] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:41:02.165 05:31:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.165 05:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:41:02.165 05:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:02.165 05:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:02.165 05:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:41:02.165 05:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:41:02.165 05:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:41:02.165 05:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:02.165 05:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:02.165 05:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:02.165 05:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:02.165 05:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:02.165 05:31:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.165 05:31:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:41:02.165 05:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:02.165 05:31:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.165 05:31:48 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:02.165 "name": "raid_bdev1", 00:41:02.165 "uuid": "40a38bfc-6f3d-4c86-98f4-b29eb0e30830", 00:41:02.165 "strip_size_kb": 0, 00:41:02.165 "state": "online", 00:41:02.165 "raid_level": "raid1", 00:41:02.165 "superblock": false, 00:41:02.165 "num_base_bdevs": 2, 00:41:02.165 "num_base_bdevs_discovered": 1, 00:41:02.165 "num_base_bdevs_operational": 1, 00:41:02.165 "base_bdevs_list": [ 00:41:02.165 { 00:41:02.165 "name": null, 00:41:02.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:02.165 "is_configured": false, 00:41:02.165 "data_offset": 0, 00:41:02.165 "data_size": 65536 00:41:02.165 }, 00:41:02.165 { 00:41:02.165 "name": "BaseBdev2", 00:41:02.165 "uuid": "e3d25c0a-9df4-5a16-85f6-3ca22abaaa75", 00:41:02.165 "is_configured": true, 00:41:02.165 "data_offset": 0, 00:41:02.165 "data_size": 65536 00:41:02.165 } 00:41:02.165 ] 00:41:02.165 }' 00:41:02.165 05:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:02.165 05:31:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:41:02.729 151.50 IOPS, 454.50 MiB/s [2024-12-09T05:31:49.701Z] 05:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:02.729 05:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:02.729 05:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:41:02.729 05:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:41:02.729 05:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:02.729 05:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:02.729 05:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:02.729 05:31:49 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.729 05:31:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:41:02.729 05:31:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.729 05:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:02.729 "name": "raid_bdev1", 00:41:02.729 "uuid": "40a38bfc-6f3d-4c86-98f4-b29eb0e30830", 00:41:02.729 "strip_size_kb": 0, 00:41:02.729 "state": "online", 00:41:02.729 "raid_level": "raid1", 00:41:02.729 "superblock": false, 00:41:02.729 "num_base_bdevs": 2, 00:41:02.729 "num_base_bdevs_discovered": 1, 00:41:02.729 "num_base_bdevs_operational": 1, 00:41:02.729 "base_bdevs_list": [ 00:41:02.729 { 00:41:02.729 "name": null, 00:41:02.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:02.730 "is_configured": false, 00:41:02.730 "data_offset": 0, 00:41:02.730 "data_size": 65536 00:41:02.730 }, 00:41:02.730 { 00:41:02.730 "name": "BaseBdev2", 00:41:02.730 "uuid": "e3d25c0a-9df4-5a16-85f6-3ca22abaaa75", 00:41:02.730 "is_configured": true, 00:41:02.730 "data_offset": 0, 00:41:02.730 "data_size": 65536 00:41:02.730 } 00:41:02.730 ] 00:41:02.730 }' 00:41:02.730 05:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:02.730 05:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:41:02.730 05:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:02.730 05:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:41:02.730 05:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:41:02.730 05:31:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.730 05:31:49 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:41:02.730 [2024-12-09 05:31:49.603816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:02.730 05:31:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.730 05:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:41:02.730 [2024-12-09 05:31:49.683076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:41:02.730 [2024-12-09 05:31:49.686035] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:03.295 [2024-12-09 05:31:49.979718] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:41:03.553 166.33 IOPS, 499.00 MiB/s [2024-12-09T05:31:50.525Z] [2024-12-09 05:31:50.435299] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:41:03.554 [2024-12-09 05:31:50.436014] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:41:03.812 05:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:03.812 05:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:03.812 05:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:03.812 05:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:03.812 05:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:03.812 05:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:03.812 05:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:03.812 05:31:50 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:03.812 05:31:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:41:03.812 [2024-12-09 05:31:50.673530] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:41:03.812 05:31:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:03.812 05:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:03.812 "name": "raid_bdev1", 00:41:03.812 "uuid": "40a38bfc-6f3d-4c86-98f4-b29eb0e30830", 00:41:03.812 "strip_size_kb": 0, 00:41:03.812 "state": "online", 00:41:03.813 "raid_level": "raid1", 00:41:03.813 "superblock": false, 00:41:03.813 "num_base_bdevs": 2, 00:41:03.813 "num_base_bdevs_discovered": 2, 00:41:03.813 "num_base_bdevs_operational": 2, 00:41:03.813 "process": { 00:41:03.813 "type": "rebuild", 00:41:03.813 "target": "spare", 00:41:03.813 "progress": { 00:41:03.813 "blocks": 12288, 00:41:03.813 "percent": 18 00:41:03.813 } 00:41:03.813 }, 00:41:03.813 "base_bdevs_list": [ 00:41:03.813 { 00:41:03.813 "name": "spare", 00:41:03.813 "uuid": "e23c691e-846a-5411-9096-917e6c3f2b8f", 00:41:03.813 "is_configured": true, 00:41:03.813 "data_offset": 0, 00:41:03.813 "data_size": 65536 00:41:03.813 }, 00:41:03.813 { 00:41:03.813 "name": "BaseBdev2", 00:41:03.813 "uuid": "e3d25c0a-9df4-5a16-85f6-3ca22abaaa75", 00:41:03.813 "is_configured": true, 00:41:03.813 "data_offset": 0, 00:41:03.813 "data_size": 65536 00:41:03.813 } 00:41:03.813 ] 00:41:03.813 }' 00:41:03.813 05:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:03.813 05:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:03.813 05:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:04.071 [2024-12-09 
05:31:50.785533] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:41:04.071 [2024-12-09 05:31:50.785869] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:41:04.071 05:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:04.071 05:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:41:04.071 05:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:41:04.071 05:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:41:04.071 05:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:41:04.071 05:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=446 00:41:04.071 05:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:04.071 05:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:04.071 05:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:04.071 05:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:04.071 05:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:04.072 05:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:04.072 05:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:04.072 05:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:04.072 05:31:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.072 05:31:50 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:41:04.072 05:31:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.072 05:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:04.072 "name": "raid_bdev1", 00:41:04.072 "uuid": "40a38bfc-6f3d-4c86-98f4-b29eb0e30830", 00:41:04.072 "strip_size_kb": 0, 00:41:04.072 "state": "online", 00:41:04.072 "raid_level": "raid1", 00:41:04.072 "superblock": false, 00:41:04.072 "num_base_bdevs": 2, 00:41:04.072 "num_base_bdevs_discovered": 2, 00:41:04.072 "num_base_bdevs_operational": 2, 00:41:04.072 "process": { 00:41:04.072 "type": "rebuild", 00:41:04.072 "target": "spare", 00:41:04.072 "progress": { 00:41:04.072 "blocks": 16384, 00:41:04.072 "percent": 25 00:41:04.072 } 00:41:04.072 }, 00:41:04.072 "base_bdevs_list": [ 00:41:04.072 { 00:41:04.072 "name": "spare", 00:41:04.072 "uuid": "e23c691e-846a-5411-9096-917e6c3f2b8f", 00:41:04.072 "is_configured": true, 00:41:04.072 "data_offset": 0, 00:41:04.072 "data_size": 65536 00:41:04.072 }, 00:41:04.072 { 00:41:04.072 "name": "BaseBdev2", 00:41:04.072 "uuid": "e3d25c0a-9df4-5a16-85f6-3ca22abaaa75", 00:41:04.072 "is_configured": true, 00:41:04.072 "data_offset": 0, 00:41:04.072 "data_size": 65536 00:41:04.072 } 00:41:04.072 ] 00:41:04.072 }' 00:41:04.072 05:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:04.072 05:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:04.072 05:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:04.072 05:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:04.072 05:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:41:04.330 149.25 IOPS, 447.75 MiB/s [2024-12-09T05:31:51.302Z] [2024-12-09 05:31:51.247690] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:41:04.330 [2024-12-09 05:31:51.248479] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:41:04.898 [2024-12-09 05:31:51.726760] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:41:05.157 05:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:05.157 05:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:05.157 05:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:05.157 05:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:05.157 05:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:05.157 05:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:05.157 05:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:05.157 05:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:05.157 05:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:05.157 05:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:41:05.157 05:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:05.157 05:31:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:05.157 "name": "raid_bdev1", 00:41:05.157 "uuid": "40a38bfc-6f3d-4c86-98f4-b29eb0e30830", 00:41:05.157 "strip_size_kb": 0, 00:41:05.157 "state": "online", 00:41:05.157 "raid_level": "raid1", 00:41:05.157 "superblock": 
false, 00:41:05.157 "num_base_bdevs": 2, 00:41:05.157 "num_base_bdevs_discovered": 2, 00:41:05.157 "num_base_bdevs_operational": 2, 00:41:05.157 "process": { 00:41:05.157 "type": "rebuild", 00:41:05.157 "target": "spare", 00:41:05.157 "progress": { 00:41:05.157 "blocks": 30720, 00:41:05.157 "percent": 46 00:41:05.157 } 00:41:05.157 }, 00:41:05.157 "base_bdevs_list": [ 00:41:05.157 { 00:41:05.157 "name": "spare", 00:41:05.157 "uuid": "e23c691e-846a-5411-9096-917e6c3f2b8f", 00:41:05.157 "is_configured": true, 00:41:05.157 "data_offset": 0, 00:41:05.157 "data_size": 65536 00:41:05.157 }, 00:41:05.157 { 00:41:05.157 "name": "BaseBdev2", 00:41:05.157 "uuid": "e3d25c0a-9df4-5a16-85f6-3ca22abaaa75", 00:41:05.157 "is_configured": true, 00:41:05.157 "data_offset": 0, 00:41:05.157 "data_size": 65536 00:41:05.157 } 00:41:05.157 ] 00:41:05.157 }' 00:41:05.157 05:31:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:05.157 [2024-12-09 05:31:52.049685] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:41:05.157 05:31:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:05.157 05:31:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:05.415 127.40 IOPS, 382.20 MiB/s [2024-12-09T05:31:52.387Z] 05:31:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:05.415 05:31:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:41:05.674 [2024-12-09 05:31:52.513662] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:41:05.932 [2024-12-09 05:31:52.892830] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:41:06.190 114.50 IOPS, 343.50 MiB/s 
[2024-12-09T05:31:53.162Z] 05:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:06.190 05:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:06.190 05:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:06.190 05:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:06.190 05:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:06.190 05:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:06.190 05:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:06.190 05:31:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:06.190 05:31:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:41:06.190 05:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:06.450 05:31:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:06.450 05:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:06.450 "name": "raid_bdev1", 00:41:06.450 "uuid": "40a38bfc-6f3d-4c86-98f4-b29eb0e30830", 00:41:06.450 "strip_size_kb": 0, 00:41:06.450 "state": "online", 00:41:06.450 "raid_level": "raid1", 00:41:06.450 "superblock": false, 00:41:06.450 "num_base_bdevs": 2, 00:41:06.450 "num_base_bdevs_discovered": 2, 00:41:06.450 "num_base_bdevs_operational": 2, 00:41:06.450 "process": { 00:41:06.450 "type": "rebuild", 00:41:06.450 "target": "spare", 00:41:06.450 "progress": { 00:41:06.450 "blocks": 49152, 00:41:06.450 "percent": 75 00:41:06.450 } 00:41:06.450 }, 00:41:06.450 "base_bdevs_list": [ 00:41:06.450 { 00:41:06.450 "name": "spare", 00:41:06.450 "uuid": 
"e23c691e-846a-5411-9096-917e6c3f2b8f", 00:41:06.450 "is_configured": true, 00:41:06.450 "data_offset": 0, 00:41:06.450 "data_size": 65536 00:41:06.450 }, 00:41:06.450 { 00:41:06.450 "name": "BaseBdev2", 00:41:06.450 "uuid": "e3d25c0a-9df4-5a16-85f6-3ca22abaaa75", 00:41:06.450 "is_configured": true, 00:41:06.450 "data_offset": 0, 00:41:06.450 "data_size": 65536 00:41:06.450 } 00:41:06.450 ] 00:41:06.450 }' 00:41:06.450 05:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:06.450 05:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:06.450 05:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:06.450 05:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:06.450 05:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:41:07.385 [2024-12-09 05:31:54.017005] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:41:07.385 103.43 IOPS, 310.29 MiB/s [2024-12-09T05:31:54.357Z] [2024-12-09 05:31:54.117084] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:41:07.385 [2024-12-09 05:31:54.119778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:07.385 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:07.385 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:07.385 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:07.385 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:07.385 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:07.385 05:31:54 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:07.385 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:07.385 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:07.385 05:31:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:07.385 05:31:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:41:07.385 05:31:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:07.643 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:07.643 "name": "raid_bdev1", 00:41:07.643 "uuid": "40a38bfc-6f3d-4c86-98f4-b29eb0e30830", 00:41:07.643 "strip_size_kb": 0, 00:41:07.643 "state": "online", 00:41:07.643 "raid_level": "raid1", 00:41:07.643 "superblock": false, 00:41:07.643 "num_base_bdevs": 2, 00:41:07.643 "num_base_bdevs_discovered": 2, 00:41:07.643 "num_base_bdevs_operational": 2, 00:41:07.643 "base_bdevs_list": [ 00:41:07.643 { 00:41:07.643 "name": "spare", 00:41:07.643 "uuid": "e23c691e-846a-5411-9096-917e6c3f2b8f", 00:41:07.643 "is_configured": true, 00:41:07.643 "data_offset": 0, 00:41:07.643 "data_size": 65536 00:41:07.643 }, 00:41:07.643 { 00:41:07.643 "name": "BaseBdev2", 00:41:07.643 "uuid": "e3d25c0a-9df4-5a16-85f6-3ca22abaaa75", 00:41:07.643 "is_configured": true, 00:41:07.643 "data_offset": 0, 00:41:07.643 "data_size": 65536 00:41:07.643 } 00:41:07.643 ] 00:41:07.643 }' 00:41:07.643 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:07.643 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:41:07.643 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:07.643 05:31:54 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:41:07.643 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:41:07.643 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:07.643 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:07.643 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:41:07.643 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:41:07.643 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:07.643 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:07.643 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:07.643 05:31:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:07.643 05:31:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:41:07.643 05:31:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:07.643 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:07.643 "name": "raid_bdev1", 00:41:07.643 "uuid": "40a38bfc-6f3d-4c86-98f4-b29eb0e30830", 00:41:07.643 "strip_size_kb": 0, 00:41:07.643 "state": "online", 00:41:07.643 "raid_level": "raid1", 00:41:07.643 "superblock": false, 00:41:07.644 "num_base_bdevs": 2, 00:41:07.644 "num_base_bdevs_discovered": 2, 00:41:07.644 "num_base_bdevs_operational": 2, 00:41:07.644 "base_bdevs_list": [ 00:41:07.644 { 00:41:07.644 "name": "spare", 00:41:07.644 "uuid": "e23c691e-846a-5411-9096-917e6c3f2b8f", 00:41:07.644 "is_configured": true, 00:41:07.644 "data_offset": 0, 00:41:07.644 "data_size": 65536 00:41:07.644 }, 00:41:07.644 { 00:41:07.644 "name": 
"BaseBdev2", 00:41:07.644 "uuid": "e3d25c0a-9df4-5a16-85f6-3ca22abaaa75", 00:41:07.644 "is_configured": true, 00:41:07.644 "data_offset": 0, 00:41:07.644 "data_size": 65536 00:41:07.644 } 00:41:07.644 ] 00:41:07.644 }' 00:41:07.644 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:07.644 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:41:07.644 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:07.903 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:41:07.903 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:41:07.903 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:07.903 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:07.903 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:41:07.903 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:41:07.903 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:41:07.903 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:07.903 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:07.903 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:07.903 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:07.903 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:07.903 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:41:07.903 05:31:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:07.903 05:31:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:41:07.903 05:31:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:07.903 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:07.903 "name": "raid_bdev1", 00:41:07.903 "uuid": "40a38bfc-6f3d-4c86-98f4-b29eb0e30830", 00:41:07.903 "strip_size_kb": 0, 00:41:07.903 "state": "online", 00:41:07.903 "raid_level": "raid1", 00:41:07.903 "superblock": false, 00:41:07.903 "num_base_bdevs": 2, 00:41:07.903 "num_base_bdevs_discovered": 2, 00:41:07.903 "num_base_bdevs_operational": 2, 00:41:07.903 "base_bdevs_list": [ 00:41:07.903 { 00:41:07.903 "name": "spare", 00:41:07.903 "uuid": "e23c691e-846a-5411-9096-917e6c3f2b8f", 00:41:07.903 "is_configured": true, 00:41:07.903 "data_offset": 0, 00:41:07.903 "data_size": 65536 00:41:07.903 }, 00:41:07.903 { 00:41:07.903 "name": "BaseBdev2", 00:41:07.903 "uuid": "e3d25c0a-9df4-5a16-85f6-3ca22abaaa75", 00:41:07.903 "is_configured": true, 00:41:07.903 "data_offset": 0, 00:41:07.903 "data_size": 65536 00:41:07.903 } 00:41:07.903 ] 00:41:07.903 }' 00:41:07.903 05:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:07.903 05:31:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:41:08.419 95.25 IOPS, 285.75 MiB/s [2024-12-09T05:31:55.391Z] 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:41:08.419 05:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.419 05:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:41:08.419 [2024-12-09 05:31:55.163649] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:41:08.419 [2024-12-09 
05:31:55.163685] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:41:08.419 00:41:08.419 Latency(us) 00:41:08.419 [2024-12-09T05:31:55.391Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:08.419 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:41:08.419 raid_bdev1 : 8.15 94.37 283.12 0.00 0.00 14763.22 305.34 126782.37 00:41:08.419 [2024-12-09T05:31:55.391Z] =================================================================================================================== 00:41:08.419 [2024-12-09T05:31:55.391Z] Total : 94.37 283.12 0.00 0.00 14763.22 305.34 126782.37 00:41:08.419 [2024-12-09 05:31:55.266744] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:08.419 [2024-12-09 05:31:55.266860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:08.419 [2024-12-09 05:31:55.266999] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:08.419 [2024-12-09 05:31:55.267021] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:41:08.419 { 00:41:08.419 "results": [ 00:41:08.419 { 00:41:08.419 "job": "raid_bdev1", 00:41:08.419 "core_mask": "0x1", 00:41:08.419 "workload": "randrw", 00:41:08.419 "percentage": 50, 00:41:08.419 "status": "finished", 00:41:08.419 "queue_depth": 2, 00:41:08.419 "io_size": 3145728, 00:41:08.419 "runtime": 8.14839, 00:41:08.419 "iops": 94.37447152136802, 00:41:08.419 "mibps": 283.12341456410405, 00:41:08.419 "io_failed": 0, 00:41:08.419 "io_timeout": 0, 00:41:08.419 "avg_latency_us": 14763.22069275328, 00:41:08.419 "min_latency_us": 305.3381818181818, 00:41:08.419 "max_latency_us": 126782.37090909091 00:41:08.419 } 00:41:08.419 ], 00:41:08.419 "core_count": 1 00:41:08.419 } 00:41:08.419 05:31:55 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.419 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:08.419 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:41:08.419 05:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.419 05:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:41:08.419 05:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.419 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:41:08.419 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:41:08.419 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:41:08.419 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:41:08.419 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:41:08.419 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:41:08.419 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:41:08.419 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:41:08.419 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:41:08.419 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:41:08.419 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:41:08.419 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:41:08.419 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:41:08.677 
/dev/nbd0 00:41:08.677 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:41:08.677 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:41:08.677 05:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:41:08.677 05:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:41:08.677 05:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:08.677 05:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:08.677 05:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:41:08.935 05:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:41:08.935 05:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:08.935 05:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:08.935 05:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:08.935 1+0 records in 00:41:08.935 1+0 records out 00:41:08.935 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351265 s, 11.7 MB/s 00:41:08.935 05:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:08.935 05:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:41:08.935 05:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:08.935 05:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:41:08.935 05:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:41:08.935 
05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:08.935 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:41:08.935 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:41:08.935 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:41:08.935 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:41:08.935 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:41:08.935 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:41:08.935 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:41:08.935 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:41:08.935 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:41:08.935 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:41:08.935 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:41:08.935 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:41:08.935 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:41:08.935 /dev/nbd1 00:41:09.215 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:41:09.215 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:41:09.215 05:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:41:09.215 05:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:41:09.215 05:31:55 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:09.215 05:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:09.215 05:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:41:09.215 05:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:41:09.215 05:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:09.215 05:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:09.215 05:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:09.215 1+0 records in 00:41:09.215 1+0 records out 00:41:09.215 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294882 s, 13.9 MB/s 00:41:09.215 05:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:09.215 05:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:41:09.215 05:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:09.216 05:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:41:09.216 05:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:41:09.216 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:09.216 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:41:09.216 05:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:41:09.216 05:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:41:09.216 05:31:56 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:41:09.216 05:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:41:09.216 05:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:09.216 05:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:41:09.216 05:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:09.216 05:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:41:09.494 05:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:41:09.494 05:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:41:09.494 05:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:41:09.494 05:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:09.494 05:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:09.494 05:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:41:09.494 05:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:41:09.494 05:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:41:09.494 05:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:41:09.494 05:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:41:09.494 05:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:41:09.494 05:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:09.494 05:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 
-- # local i 00:41:09.494 05:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:09.495 05:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:41:09.752 05:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:09.752 05:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:09.752 05:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:09.752 05:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:09.752 05:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:09.752 05:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:09.752 05:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:41:09.752 05:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:41:09.752 05:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:41:09.752 05:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76803 00:41:09.752 05:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76803 ']' 00:41:09.752 05:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76803 00:41:09.752 05:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:41:09.752 05:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:09.752 05:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76803 00:41:10.010 killing process with pid 76803 00:41:10.010 Received shutdown signal, test time was about 9.634857 seconds 00:41:10.010 00:41:10.010 
Latency(us) 00:41:10.010 [2024-12-09T05:31:56.982Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:10.010 [2024-12-09T05:31:56.982Z] =================================================================================================================== 00:41:10.010 [2024-12-09T05:31:56.982Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:10.011 05:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:10.011 05:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:10.011 05:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76803' 00:41:10.011 05:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76803 00:41:10.011 05:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76803 00:41:10.011 [2024-12-09 05:31:56.734036] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:41:10.011 [2024-12-09 05:31:56.926049] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:41:11.435 00:41:11.435 real 0m13.035s 00:41:11.435 user 0m17.101s 00:41:11.435 sys 0m1.453s 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:41:11.435 ************************************ 00:41:11.435 END TEST raid_rebuild_test_io 00:41:11.435 ************************************ 00:41:11.435 05:31:58 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:41:11.435 05:31:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:41:11.435 05:31:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:11.435 05:31:58 
bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:41:11.435 ************************************ 00:41:11.435 START TEST raid_rebuild_test_sb_io 00:41:11.435 ************************************ 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:41:11.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77186 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77186 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77186 ']' 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:11.435 05:31:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:11.435 [2024-12-09 05:31:58.266731] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:41:11.435 I/O size of 3145728 is greater than zero copy threshold (65536). 00:41:11.435 Zero copy mechanism will not be used. 00:41:11.435 [2024-12-09 05:31:58.267230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77186 ] 00:41:11.693 [2024-12-09 05:31:58.464930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:11.693 [2024-12-09 05:31:58.637349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:11.951 [2024-12-09 05:31:58.866316] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:41:11.951 [2024-12-09 05:31:58.866393] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:12.515 BaseBdev1_malloc 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:12.515 [2024-12-09 05:31:59.278438] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:41:12.515 [2024-12-09 05:31:59.278694] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:12.515 [2024-12-09 05:31:59.278798] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:41:12.515 [2024-12-09 05:31:59.279106] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:12.515 [2024-12-09 05:31:59.282220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:12.515 BaseBdev1 00:41:12.515 [2024-12-09 05:31:59.282411] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:12.515 BaseBdev2_malloc 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:12.515 [2024-12-09 05:31:59.333921] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:41:12.515 [2024-12-09 05:31:59.334014] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:12.515 [2024-12-09 05:31:59.334048] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:41:12.515 [2024-12-09 05:31:59.334067] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:12.515 [2024-12-09 05:31:59.337217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:12.515 [2024-12-09 05:31:59.337260] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:41:12.515 BaseBdev2 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:12.515 spare_malloc 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:12.515 spare_delay 
00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:12.515 [2024-12-09 05:31:59.409519] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:41:12.515 [2024-12-09 05:31:59.409740] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:12.515 [2024-12-09 05:31:59.409809] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:41:12.515 [2024-12-09 05:31:59.409858] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:12.515 [2024-12-09 05:31:59.413089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:12.515 [2024-12-09 05:31:59.413185] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:41:12.515 spare 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:12.515 [2024-12-09 05:31:59.421605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:41:12.515 [2024-12-09 05:31:59.424527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:41:12.515 [2024-12-09 05:31:59.424950] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:41:12.515 [2024-12-09 05:31:59.425078] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:41:12.515 [2024-12-09 05:31:59.425517] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:41:12.515 [2024-12-09 05:31:59.425936] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:41:12.515 [2024-12-09 05:31:59.426067] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:41:12.515 [2024-12-09 05:31:59.426529] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:12.515 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:12.515 05:31:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:12.516 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:12.516 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:12.516 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:12.516 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:12.516 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:12.516 "name": "raid_bdev1", 00:41:12.516 "uuid": "1ae3d4d6-f197-48e4-9c4a-507d06e11733", 00:41:12.516 "strip_size_kb": 0, 00:41:12.516 "state": "online", 00:41:12.516 "raid_level": "raid1", 00:41:12.516 "superblock": true, 00:41:12.516 "num_base_bdevs": 2, 00:41:12.516 "num_base_bdevs_discovered": 2, 00:41:12.516 "num_base_bdevs_operational": 2, 00:41:12.516 "base_bdevs_list": [ 00:41:12.516 { 00:41:12.516 "name": "BaseBdev1", 00:41:12.516 "uuid": "fd276204-5be0-529a-8c5c-8c169eb8949a", 00:41:12.516 "is_configured": true, 00:41:12.516 "data_offset": 2048, 00:41:12.516 "data_size": 63488 00:41:12.516 }, 00:41:12.516 { 00:41:12.516 "name": "BaseBdev2", 00:41:12.516 "uuid": "dece7ccd-9b37-5bd9-a509-1313eb39c66e", 00:41:12.516 "is_configured": true, 00:41:12.516 "data_offset": 2048, 00:41:12.516 "data_size": 63488 00:41:12.516 } 00:41:12.516 ] 00:41:12.516 }' 00:41:12.516 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:12.516 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:13.081 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:41:13.081 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:41:13.081 05:31:59 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:13.081 05:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:13.081 [2024-12-09 05:31:59.987166] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:13.081 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:13.081 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:41:13.081 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:13.081 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:41:13.081 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:13.081 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:13.081 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:13.338 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:41:13.338 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:41:13.338 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:41:13.338 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:41:13.338 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:13.338 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:13.338 [2024-12-09 05:32:00.094755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:41:13.338 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:41:13.338 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:41:13.338 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:13.339 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:13.339 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:41:13.339 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:41:13.339 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:41:13.339 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:13.339 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:13.339 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:13.339 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:13.339 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:13.339 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:13.339 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:13.339 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:13.339 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:13.339 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:13.339 "name": "raid_bdev1", 00:41:13.339 "uuid": "1ae3d4d6-f197-48e4-9c4a-507d06e11733", 00:41:13.339 "strip_size_kb": 0, 00:41:13.339 "state": "online", 00:41:13.339 
"raid_level": "raid1", 00:41:13.339 "superblock": true, 00:41:13.339 "num_base_bdevs": 2, 00:41:13.339 "num_base_bdevs_discovered": 1, 00:41:13.339 "num_base_bdevs_operational": 1, 00:41:13.339 "base_bdevs_list": [ 00:41:13.339 { 00:41:13.339 "name": null, 00:41:13.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:13.339 "is_configured": false, 00:41:13.339 "data_offset": 0, 00:41:13.339 "data_size": 63488 00:41:13.339 }, 00:41:13.339 { 00:41:13.339 "name": "BaseBdev2", 00:41:13.339 "uuid": "dece7ccd-9b37-5bd9-a509-1313eb39c66e", 00:41:13.339 "is_configured": true, 00:41:13.339 "data_offset": 2048, 00:41:13.339 "data_size": 63488 00:41:13.339 } 00:41:13.339 ] 00:41:13.339 }' 00:41:13.339 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:13.339 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:13.339 [2024-12-09 05:32:00.223467] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:41:13.339 I/O size of 3145728 is greater than zero copy threshold (65536). 00:41:13.339 Zero copy mechanism will not be used. 00:41:13.339 Running I/O for 60 seconds... 
00:41:13.906 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:41:13.906 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:13.906 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:13.906 [2024-12-09 05:32:00.638573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:13.906 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:13.906 05:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:41:13.906 [2024-12-09 05:32:00.713559] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:41:13.906 [2024-12-09 05:32:00.716661] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:13.906 [2024-12-09 05:32:00.835764] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:41:13.906 [2024-12-09 05:32:00.836666] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:41:14.165 [2024-12-09 05:32:00.957707] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:41:14.165 [2024-12-09 05:32:00.958172] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:41:14.423 132.00 IOPS, 396.00 MiB/s [2024-12-09T05:32:01.395Z] [2024-12-09 05:32:01.308217] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:41:14.681 [2024-12-09 05:32:01.445721] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:41:14.681 [2024-12-09 05:32:01.446147] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:41:14.940 05:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:14.940 05:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:14.940 05:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:14.940 05:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:14.940 05:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:14.940 05:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:14.940 05:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:14.940 05:32:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.940 05:32:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:14.940 05:32:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.940 05:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:14.940 "name": "raid_bdev1", 00:41:14.940 "uuid": "1ae3d4d6-f197-48e4-9c4a-507d06e11733", 00:41:14.940 "strip_size_kb": 0, 00:41:14.940 "state": "online", 00:41:14.940 "raid_level": "raid1", 00:41:14.940 "superblock": true, 00:41:14.940 "num_base_bdevs": 2, 00:41:14.940 "num_base_bdevs_discovered": 2, 00:41:14.940 "num_base_bdevs_operational": 2, 00:41:14.940 "process": { 00:41:14.940 "type": "rebuild", 00:41:14.940 "target": "spare", 00:41:14.940 "progress": { 00:41:14.940 "blocks": 12288, 00:41:14.940 "percent": 19 00:41:14.940 } 00:41:14.940 }, 00:41:14.940 "base_bdevs_list": [ 00:41:14.940 { 00:41:14.940 "name": "spare", 
00:41:14.940 "uuid": "5939469a-3b65-5f19-8849-bdda6dc1ca0f", 00:41:14.940 "is_configured": true, 00:41:14.940 "data_offset": 2048, 00:41:14.940 "data_size": 63488 00:41:14.940 }, 00:41:14.940 { 00:41:14.940 "name": "BaseBdev2", 00:41:14.940 "uuid": "dece7ccd-9b37-5bd9-a509-1313eb39c66e", 00:41:14.940 "is_configured": true, 00:41:14.940 "data_offset": 2048, 00:41:14.940 "data_size": 63488 00:41:14.940 } 00:41:14.940 ] 00:41:14.940 }' 00:41:14.940 05:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:14.940 [2024-12-09 05:32:01.778858] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:41:14.940 05:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:14.940 05:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:14.940 05:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:14.940 05:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:41:14.940 05:32:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.940 05:32:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:14.940 [2024-12-09 05:32:01.869912] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:14.940 [2024-12-09 05:32:01.898183] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:41:15.199 [2024-12-09 05:32:02.000115] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:41:15.199 [2024-12-09 05:32:02.002921] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:15.199 [2024-12-09 05:32:02.002966] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:15.199 [2024-12-09 05:32:02.002980] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:41:15.199 [2024-12-09 05:32:02.032569] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:41:15.199 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:15.199 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:41:15.199 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:15.199 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:15.199 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:41:15.199 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:41:15.199 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:41:15.199 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:15.199 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:15.199 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:15.199 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:15.199 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:15.199 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:15.199 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:15.199 05:32:02 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:15.199 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:15.199 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:15.199 "name": "raid_bdev1", 00:41:15.199 "uuid": "1ae3d4d6-f197-48e4-9c4a-507d06e11733", 00:41:15.199 "strip_size_kb": 0, 00:41:15.199 "state": "online", 00:41:15.199 "raid_level": "raid1", 00:41:15.199 "superblock": true, 00:41:15.199 "num_base_bdevs": 2, 00:41:15.199 "num_base_bdevs_discovered": 1, 00:41:15.199 "num_base_bdevs_operational": 1, 00:41:15.199 "base_bdevs_list": [ 00:41:15.199 { 00:41:15.200 "name": null, 00:41:15.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:15.200 "is_configured": false, 00:41:15.200 "data_offset": 0, 00:41:15.200 "data_size": 63488 00:41:15.200 }, 00:41:15.200 { 00:41:15.200 "name": "BaseBdev2", 00:41:15.200 "uuid": "dece7ccd-9b37-5bd9-a509-1313eb39c66e", 00:41:15.200 "is_configured": true, 00:41:15.200 "data_offset": 2048, 00:41:15.200 "data_size": 63488 00:41:15.200 } 00:41:15.200 ] 00:41:15.200 }' 00:41:15.200 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:15.200 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:15.717 115.00 IOPS, 345.00 MiB/s [2024-12-09T05:32:02.689Z] 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:15.717 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:15.717 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:41:15.717 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:41:15.717 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:15.717 
05:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:15.717 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:15.717 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:15.717 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:15.717 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:15.717 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:15.717 "name": "raid_bdev1", 00:41:15.717 "uuid": "1ae3d4d6-f197-48e4-9c4a-507d06e11733", 00:41:15.717 "strip_size_kb": 0, 00:41:15.717 "state": "online", 00:41:15.717 "raid_level": "raid1", 00:41:15.717 "superblock": true, 00:41:15.717 "num_base_bdevs": 2, 00:41:15.717 "num_base_bdevs_discovered": 1, 00:41:15.717 "num_base_bdevs_operational": 1, 00:41:15.717 "base_bdevs_list": [ 00:41:15.717 { 00:41:15.717 "name": null, 00:41:15.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:15.717 "is_configured": false, 00:41:15.717 "data_offset": 0, 00:41:15.717 "data_size": 63488 00:41:15.717 }, 00:41:15.717 { 00:41:15.717 "name": "BaseBdev2", 00:41:15.717 "uuid": "dece7ccd-9b37-5bd9-a509-1313eb39c66e", 00:41:15.717 "is_configured": true, 00:41:15.717 "data_offset": 2048, 00:41:15.717 "data_size": 63488 00:41:15.717 } 00:41:15.717 ] 00:41:15.717 }' 00:41:15.717 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:15.976 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:41:15.976 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:15.976 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:41:15.976 05:32:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:41:15.976 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:15.976 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:15.976 [2024-12-09 05:32:02.758568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:15.976 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:15.976 05:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:41:15.976 [2024-12-09 05:32:02.833220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:41:15.976 [2024-12-09 05:32:02.835588] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:16.234 [2024-12-09 05:32:02.965587] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:41:16.234 [2024-12-09 05:32:03.187933] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:41:16.234 [2024-12-09 05:32:03.188333] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:41:16.750 150.00 IOPS, 450.00 MiB/s [2024-12-09T05:32:03.722Z] [2024-12-09 05:32:03.652240] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:41:17.009 05:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:17.009 05:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:17.009 05:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:17.009 05:32:03 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:17.009 05:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:17.009 05:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:17.009 05:32:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.009 05:32:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:17.009 05:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:17.009 05:32:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.009 05:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:17.009 "name": "raid_bdev1", 00:41:17.009 "uuid": "1ae3d4d6-f197-48e4-9c4a-507d06e11733", 00:41:17.009 "strip_size_kb": 0, 00:41:17.009 "state": "online", 00:41:17.009 "raid_level": "raid1", 00:41:17.009 "superblock": true, 00:41:17.009 "num_base_bdevs": 2, 00:41:17.009 "num_base_bdevs_discovered": 2, 00:41:17.009 "num_base_bdevs_operational": 2, 00:41:17.009 "process": { 00:41:17.009 "type": "rebuild", 00:41:17.009 "target": "spare", 00:41:17.009 "progress": { 00:41:17.009 "blocks": 12288, 00:41:17.009 "percent": 19 00:41:17.009 } 00:41:17.009 }, 00:41:17.009 "base_bdevs_list": [ 00:41:17.009 { 00:41:17.009 "name": "spare", 00:41:17.009 "uuid": "5939469a-3b65-5f19-8849-bdda6dc1ca0f", 00:41:17.009 "is_configured": true, 00:41:17.009 "data_offset": 2048, 00:41:17.009 "data_size": 63488 00:41:17.009 }, 00:41:17.009 { 00:41:17.009 "name": "BaseBdev2", 00:41:17.009 "uuid": "dece7ccd-9b37-5bd9-a509-1313eb39c66e", 00:41:17.009 "is_configured": true, 00:41:17.009 "data_offset": 2048, 00:41:17.009 "data_size": 63488 00:41:17.009 } 00:41:17.009 ] 00:41:17.009 }' 00:41:17.009 05:32:03 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:17.009 [2024-12-09 05:32:03.898037] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:41:17.009 05:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:17.009 05:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:17.009 05:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:17.009 05:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:41:17.009 05:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:41:17.009 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:41:17.009 05:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:41:17.009 05:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:41:17.009 05:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:41:17.009 05:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=459 00:41:17.009 05:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:17.009 05:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:17.009 05:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:17.009 05:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:17.009 05:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:17.009 05:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:41:17.009 05:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:17.009 05:32:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.009 05:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:17.009 05:32:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:17.268 05:32:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.268 05:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:17.268 "name": "raid_bdev1", 00:41:17.268 "uuid": "1ae3d4d6-f197-48e4-9c4a-507d06e11733", 00:41:17.268 "strip_size_kb": 0, 00:41:17.268 "state": "online", 00:41:17.268 "raid_level": "raid1", 00:41:17.268 "superblock": true, 00:41:17.268 "num_base_bdevs": 2, 00:41:17.268 "num_base_bdevs_discovered": 2, 00:41:17.268 "num_base_bdevs_operational": 2, 00:41:17.268 "process": { 00:41:17.268 "type": "rebuild", 00:41:17.268 "target": "spare", 00:41:17.268 "progress": { 00:41:17.268 "blocks": 14336, 00:41:17.268 "percent": 22 00:41:17.268 } 00:41:17.268 }, 00:41:17.268 "base_bdevs_list": [ 00:41:17.268 { 00:41:17.268 "name": "spare", 00:41:17.268 "uuid": "5939469a-3b65-5f19-8849-bdda6dc1ca0f", 00:41:17.268 "is_configured": true, 00:41:17.268 "data_offset": 2048, 00:41:17.268 "data_size": 63488 00:41:17.268 }, 00:41:17.268 { 00:41:17.268 "name": "BaseBdev2", 00:41:17.268 "uuid": "dece7ccd-9b37-5bd9-a509-1313eb39c66e", 00:41:17.268 "is_configured": true, 00:41:17.268 "data_offset": 2048, 00:41:17.268 "data_size": 63488 00:41:17.268 } 00:41:17.268 ] 00:41:17.268 }' 00:41:17.268 05:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:17.268 05:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:41:17.268 05:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:17.268 05:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:17.268 05:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:41:17.268 [2024-12-09 05:32:04.149464] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:41:17.785 134.25 IOPS, 402.75 MiB/s [2024-12-09T05:32:04.757Z] [2024-12-09 05:32:04.511209] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:41:17.785 [2024-12-09 05:32:04.754386] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:41:17.785 [2024-12-09 05:32:04.755082] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:41:18.351 05:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:18.351 05:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:18.351 05:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:18.351 05:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:18.351 05:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:18.351 05:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:18.351 05:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:18.351 05:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:18.351 05:32:05 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:18.351 05:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:18.351 05:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:18.351 05:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:18.351 "name": "raid_bdev1", 00:41:18.351 "uuid": "1ae3d4d6-f197-48e4-9c4a-507d06e11733", 00:41:18.351 "strip_size_kb": 0, 00:41:18.351 "state": "online", 00:41:18.351 "raid_level": "raid1", 00:41:18.351 "superblock": true, 00:41:18.351 "num_base_bdevs": 2, 00:41:18.351 "num_base_bdevs_discovered": 2, 00:41:18.351 "num_base_bdevs_operational": 2, 00:41:18.351 "process": { 00:41:18.351 "type": "rebuild", 00:41:18.351 "target": "spare", 00:41:18.351 "progress": { 00:41:18.351 "blocks": 26624, 00:41:18.351 "percent": 41 00:41:18.351 } 00:41:18.351 }, 00:41:18.351 "base_bdevs_list": [ 00:41:18.351 { 00:41:18.351 "name": "spare", 00:41:18.351 "uuid": "5939469a-3b65-5f19-8849-bdda6dc1ca0f", 00:41:18.351 "is_configured": true, 00:41:18.351 "data_offset": 2048, 00:41:18.351 "data_size": 63488 00:41:18.351 }, 00:41:18.351 { 00:41:18.351 "name": "BaseBdev2", 00:41:18.351 "uuid": "dece7ccd-9b37-5bd9-a509-1313eb39c66e", 00:41:18.351 "is_configured": true, 00:41:18.351 "data_offset": 2048, 00:41:18.351 "data_size": 63488 00:41:18.351 } 00:41:18.351 ] 00:41:18.351 }' 00:41:18.351 05:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:18.351 05:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:18.351 [2024-12-09 05:32:05.251634] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:41:18.351 05:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:41:18.351 118.00 IOPS, 354.00 MiB/s [2024-12-09T05:32:05.323Z] 05:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:18.351 05:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:41:18.949 [2024-12-09 05:32:05.588715] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:41:18.949 [2024-12-09 05:32:05.818075] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:41:19.517 105.33 IOPS, 316.00 MiB/s [2024-12-09T05:32:06.489Z] 05:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:19.517 05:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:19.517 05:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:19.517 05:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:19.517 05:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:19.517 05:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:19.517 05:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:19.517 05:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:19.517 05:32:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.517 05:32:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:19.517 05:32:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.517 05:32:06 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:19.517 "name": "raid_bdev1", 00:41:19.517 "uuid": "1ae3d4d6-f197-48e4-9c4a-507d06e11733", 00:41:19.517 "strip_size_kb": 0, 00:41:19.517 "state": "online", 00:41:19.517 "raid_level": "raid1", 00:41:19.517 "superblock": true, 00:41:19.517 "num_base_bdevs": 2, 00:41:19.517 "num_base_bdevs_discovered": 2, 00:41:19.517 "num_base_bdevs_operational": 2, 00:41:19.517 "process": { 00:41:19.517 "type": "rebuild", 00:41:19.517 "target": "spare", 00:41:19.517 "progress": { 00:41:19.517 "blocks": 40960, 00:41:19.517 "percent": 64 00:41:19.517 } 00:41:19.517 }, 00:41:19.517 "base_bdevs_list": [ 00:41:19.517 { 00:41:19.517 "name": "spare", 00:41:19.517 "uuid": "5939469a-3b65-5f19-8849-bdda6dc1ca0f", 00:41:19.517 "is_configured": true, 00:41:19.517 "data_offset": 2048, 00:41:19.517 "data_size": 63488 00:41:19.517 }, 00:41:19.517 { 00:41:19.517 "name": "BaseBdev2", 00:41:19.517 "uuid": "dece7ccd-9b37-5bd9-a509-1313eb39c66e", 00:41:19.517 "is_configured": true, 00:41:19.517 "data_offset": 2048, 00:41:19.517 "data_size": 63488 00:41:19.517 } 00:41:19.517 ] 00:41:19.517 }' 00:41:19.517 05:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:19.517 05:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:19.517 05:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:19.517 05:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:19.517 05:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:41:20.112 [2024-12-09 05:32:06.922753] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:41:20.370 [2024-12-09 05:32:07.144404] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 
offset_end: 55296 00:41:20.628 97.00 IOPS, 291.00 MiB/s [2024-12-09T05:32:07.600Z] 05:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:20.628 05:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:20.628 05:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:20.628 05:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:20.628 05:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:20.628 05:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:20.628 05:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:20.628 05:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:20.628 05:32:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.628 05:32:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:20.628 [2024-12-09 05:32:07.486284] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:41:20.628 05:32:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.628 05:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:20.628 "name": "raid_bdev1", 00:41:20.628 "uuid": "1ae3d4d6-f197-48e4-9c4a-507d06e11733", 00:41:20.628 "strip_size_kb": 0, 00:41:20.628 "state": "online", 00:41:20.628 "raid_level": "raid1", 00:41:20.628 "superblock": true, 00:41:20.628 "num_base_bdevs": 2, 00:41:20.628 "num_base_bdevs_discovered": 2, 00:41:20.628 "num_base_bdevs_operational": 2, 00:41:20.628 "process": { 00:41:20.628 "type": 
"rebuild", 00:41:20.628 "target": "spare", 00:41:20.628 "progress": { 00:41:20.628 "blocks": 59392, 00:41:20.628 "percent": 93 00:41:20.628 } 00:41:20.628 }, 00:41:20.628 "base_bdevs_list": [ 00:41:20.628 { 00:41:20.628 "name": "spare", 00:41:20.628 "uuid": "5939469a-3b65-5f19-8849-bdda6dc1ca0f", 00:41:20.628 "is_configured": true, 00:41:20.628 "data_offset": 2048, 00:41:20.628 "data_size": 63488 00:41:20.628 }, 00:41:20.628 { 00:41:20.628 "name": "BaseBdev2", 00:41:20.628 "uuid": "dece7ccd-9b37-5bd9-a509-1313eb39c66e", 00:41:20.628 "is_configured": true, 00:41:20.628 "data_offset": 2048, 00:41:20.628 "data_size": 63488 00:41:20.628 } 00:41:20.628 ] 00:41:20.628 }' 00:41:20.628 05:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:20.628 05:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:20.628 05:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:20.887 05:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:20.887 05:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:41:20.887 [2024-12-09 05:32:07.717565] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:41:20.887 [2024-12-09 05:32:07.817555] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:41:20.887 [2024-12-09 05:32:07.828310] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:21.710 89.75 IOPS, 269.25 MiB/s [2024-12-09T05:32:08.682Z] 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:21.710 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:21.710 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- 
# local raid_bdev_name=raid_bdev1 00:41:21.710 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:21.710 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:21.710 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:21.710 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:21.710 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.710 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:21.710 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:21.710 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.968 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:21.968 "name": "raid_bdev1", 00:41:21.968 "uuid": "1ae3d4d6-f197-48e4-9c4a-507d06e11733", 00:41:21.968 "strip_size_kb": 0, 00:41:21.968 "state": "online", 00:41:21.968 "raid_level": "raid1", 00:41:21.968 "superblock": true, 00:41:21.968 "num_base_bdevs": 2, 00:41:21.968 "num_base_bdevs_discovered": 2, 00:41:21.968 "num_base_bdevs_operational": 2, 00:41:21.968 "base_bdevs_list": [ 00:41:21.968 { 00:41:21.968 "name": "spare", 00:41:21.968 "uuid": "5939469a-3b65-5f19-8849-bdda6dc1ca0f", 00:41:21.968 "is_configured": true, 00:41:21.968 "data_offset": 2048, 00:41:21.968 "data_size": 63488 00:41:21.968 }, 00:41:21.968 { 00:41:21.968 "name": "BaseBdev2", 00:41:21.968 "uuid": "dece7ccd-9b37-5bd9-a509-1313eb39c66e", 00:41:21.968 "is_configured": true, 00:41:21.968 "data_offset": 2048, 00:41:21.968 "data_size": 63488 00:41:21.968 } 00:41:21.968 ] 00:41:21.968 }' 00:41:21.968 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:41:21.968 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:41:21.968 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:21.968 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:41:21.968 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:41:21.968 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:21.968 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:21.968 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:41:21.968 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:41:21.968 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:21.968 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:21.968 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.968 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:21.968 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:21.968 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.968 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:21.968 "name": "raid_bdev1", 00:41:21.968 "uuid": "1ae3d4d6-f197-48e4-9c4a-507d06e11733", 00:41:21.968 "strip_size_kb": 0, 00:41:21.968 "state": "online", 00:41:21.968 "raid_level": "raid1", 00:41:21.968 "superblock": true, 00:41:21.968 "num_base_bdevs": 2, 00:41:21.968 
"num_base_bdevs_discovered": 2, 00:41:21.968 "num_base_bdevs_operational": 2, 00:41:21.968 "base_bdevs_list": [ 00:41:21.968 { 00:41:21.968 "name": "spare", 00:41:21.968 "uuid": "5939469a-3b65-5f19-8849-bdda6dc1ca0f", 00:41:21.968 "is_configured": true, 00:41:21.968 "data_offset": 2048, 00:41:21.968 "data_size": 63488 00:41:21.968 }, 00:41:21.968 { 00:41:21.968 "name": "BaseBdev2", 00:41:21.968 "uuid": "dece7ccd-9b37-5bd9-a509-1313eb39c66e", 00:41:21.968 "is_configured": true, 00:41:21.968 "data_offset": 2048, 00:41:21.968 "data_size": 63488 00:41:21.968 } 00:41:21.968 ] 00:41:21.968 }' 00:41:21.968 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:21.968 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:41:21.968 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:22.227 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:41:22.227 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:41:22.227 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:22.227 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:22.227 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:41:22.227 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:41:22.227 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:41:22.227 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:22.227 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:22.227 05:32:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:22.227 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:22.227 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:22.227 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:22.227 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:22.227 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:22.227 05:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:22.227 05:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:22.227 "name": "raid_bdev1", 00:41:22.227 "uuid": "1ae3d4d6-f197-48e4-9c4a-507d06e11733", 00:41:22.227 "strip_size_kb": 0, 00:41:22.227 "state": "online", 00:41:22.227 "raid_level": "raid1", 00:41:22.227 "superblock": true, 00:41:22.227 "num_base_bdevs": 2, 00:41:22.227 "num_base_bdevs_discovered": 2, 00:41:22.227 "num_base_bdevs_operational": 2, 00:41:22.227 "base_bdevs_list": [ 00:41:22.227 { 00:41:22.227 "name": "spare", 00:41:22.227 "uuid": "5939469a-3b65-5f19-8849-bdda6dc1ca0f", 00:41:22.227 "is_configured": true, 00:41:22.227 "data_offset": 2048, 00:41:22.227 "data_size": 63488 00:41:22.227 }, 00:41:22.227 { 00:41:22.227 "name": "BaseBdev2", 00:41:22.227 "uuid": "dece7ccd-9b37-5bd9-a509-1313eb39c66e", 00:41:22.227 "is_configured": true, 00:41:22.227 "data_offset": 2048, 00:41:22.227 "data_size": 63488 00:41:22.227 } 00:41:22.227 ] 00:41:22.227 }' 00:41:22.227 05:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:22.227 05:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:22.745 86.44 IOPS, 259.33 MiB/s [2024-12-09T05:32:09.717Z] 
05:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:41:22.745 05:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:22.745 05:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:22.745 [2024-12-09 05:32:09.510656] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:41:22.745 [2024-12-09 05:32:09.510730] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:41:22.745 00:41:22.745 Latency(us) 00:41:22.745 [2024-12-09T05:32:09.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:22.745 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:41:22.745 raid_bdev1 : 9.39 84.12 252.36 0.00 0.00 16257.94 251.35 119156.36 00:41:22.745 [2024-12-09T05:32:09.717Z] =================================================================================================================== 00:41:22.745 [2024-12-09T05:32:09.717Z] Total : 84.12 252.36 0.00 0.00 16257.94 251.35 119156.36 00:41:22.745 [2024-12-09 05:32:09.634214] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:22.745 [2024-12-09 05:32:09.634307] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:22.745 [2024-12-09 05:32:09.634400] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:22.745 [2024-12-09 05:32:09.634415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:41:22.745 { 00:41:22.745 "results": [ 00:41:22.745 { 00:41:22.745 "job": "raid_bdev1", 00:41:22.745 "core_mask": "0x1", 00:41:22.745 "workload": "randrw", 00:41:22.745 "percentage": 50, 00:41:22.745 "status": "finished", 00:41:22.745 "queue_depth": 2, 00:41:22.745 "io_size": 3145728, 00:41:22.745 
"runtime": 9.391504, 00:41:22.745 "iops": 84.11858207162558, 00:41:22.745 "mibps": 252.35574621487675, 00:41:22.745 "io_failed": 0, 00:41:22.745 "io_timeout": 0, 00:41:22.745 "avg_latency_us": 16257.937233601842, 00:41:22.745 "min_latency_us": 251.34545454545454, 00:41:22.745 "max_latency_us": 119156.36363636363 00:41:22.745 } 00:41:22.745 ], 00:41:22.745 "core_count": 1 00:41:22.745 } 00:41:22.745 05:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:22.745 05:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:22.745 05:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:22.745 05:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:22.745 05:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:41:22.745 05:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:22.745 05:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:41:22.745 05:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:41:22.745 05:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:41:22.745 05:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:41:22.745 05:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:41:22.745 05:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:41:22.746 05:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:41:22.746 05:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:41:22.746 05:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:41:22.746 05:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:41:22.746 05:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:41:22.746 05:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:41:22.746 05:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:41:23.312 /dev/nbd0 00:41:23.312 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:41:23.312 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:41:23.312 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:41:23.312 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:41:23.312 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:23.312 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:23.312 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:41:23.312 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:41:23.312 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:23.312 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:23.312 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:23.312 1+0 records in 00:41:23.312 1+0 records out 00:41:23.312 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334981 s, 12.2 MB/s 00:41:23.312 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:23.312 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:41:23.312 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:23.312 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:41:23.312 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:41:23.312 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:23.312 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:41:23.312 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:41:23.312 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:41:23.312 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:41:23.312 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:41:23.312 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:41:23.312 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:41:23.312 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:41:23.312 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:41:23.312 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:41:23.312 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:41:23.312 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:41:23.312 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:41:23.571 /dev/nbd1 00:41:23.571 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:41:23.571 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:41:23.571 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:41:23.571 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:41:23.571 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:23.571 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:23.571 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:41:23.571 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:41:23.571 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:23.571 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:23.571 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:23.571 1+0 records in 00:41:23.571 1+0 records out 00:41:23.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296371 s, 13.8 MB/s 00:41:23.571 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:23.571 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:41:23.571 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:23.571 05:32:10 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:41:23.571 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:41:23.571 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:23.571 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:41:23.571 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:41:23.830 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:41:23.830 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:41:23.830 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:41:23.830 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:23.830 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:41:23.830 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:23.830 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:41:24.089 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:41:24.089 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:41:24.089 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:41:24.089 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:24.089 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:24.089 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:41:24.089 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:41:24.089 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:41:24.089 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:41:24.089 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:41:24.089 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:41:24.089 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:24.089 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:41:24.089 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:24.089 05:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:41:24.348 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:24.348 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:24.348 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:24.348 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:24.348 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:24.348 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:24.348 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:41:24.348 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:41:24.348 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:41:24.348 05:32:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:41:24.348 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:24.348 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:24.348 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:24.348 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:41:24.348 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:24.348 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:24.348 [2024-12-09 05:32:11.088089] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:41:24.348 [2024-12-09 05:32:11.088184] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:24.348 [2024-12-09 05:32:11.088229] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:41:24.348 [2024-12-09 05:32:11.088243] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:24.348 [2024-12-09 05:32:11.091035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:24.348 [2024-12-09 05:32:11.091075] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:41:24.348 [2024-12-09 05:32:11.091208] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:41:24.348 [2024-12-09 05:32:11.091264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:24.348 [2024-12-09 05:32:11.091387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:41:24.349 spare 00:41:24.349 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:41:24.349 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:41:24.349 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:24.349 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:24.349 [2024-12-09 05:32:11.191495] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:41:24.349 [2024-12-09 05:32:11.191786] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:41:24.349 [2024-12-09 05:32:11.192211] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:41:24.349 [2024-12-09 05:32:11.192447] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:41:24.349 [2024-12-09 05:32:11.192462] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:41:24.349 [2024-12-09 05:32:11.192713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:24.349 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:24.349 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:41:24.349 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:24.349 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:24.349 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:41:24.349 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:41:24.349 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:41:24.349 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:41:24.349 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:24.349 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:24.349 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:24.349 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:24.349 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:24.349 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:24.349 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:24.349 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:24.349 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:24.349 "name": "raid_bdev1", 00:41:24.349 "uuid": "1ae3d4d6-f197-48e4-9c4a-507d06e11733", 00:41:24.349 "strip_size_kb": 0, 00:41:24.349 "state": "online", 00:41:24.349 "raid_level": "raid1", 00:41:24.349 "superblock": true, 00:41:24.349 "num_base_bdevs": 2, 00:41:24.349 "num_base_bdevs_discovered": 2, 00:41:24.349 "num_base_bdevs_operational": 2, 00:41:24.349 "base_bdevs_list": [ 00:41:24.349 { 00:41:24.349 "name": "spare", 00:41:24.349 "uuid": "5939469a-3b65-5f19-8849-bdda6dc1ca0f", 00:41:24.349 "is_configured": true, 00:41:24.349 "data_offset": 2048, 00:41:24.349 "data_size": 63488 00:41:24.349 }, 00:41:24.349 { 00:41:24.349 "name": "BaseBdev2", 00:41:24.349 "uuid": "dece7ccd-9b37-5bd9-a509-1313eb39c66e", 00:41:24.349 "is_configured": true, 00:41:24.349 "data_offset": 2048, 00:41:24.349 "data_size": 63488 00:41:24.349 } 00:41:24.349 ] 00:41:24.349 }' 00:41:24.349 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:24.349 
05:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:24.932 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:24.932 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:24.932 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:41:24.932 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:41:24.932 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:24.932 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:24.932 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:24.932 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:24.932 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:24.932 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:24.932 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:24.932 "name": "raid_bdev1", 00:41:24.932 "uuid": "1ae3d4d6-f197-48e4-9c4a-507d06e11733", 00:41:24.932 "strip_size_kb": 0, 00:41:24.932 "state": "online", 00:41:24.932 "raid_level": "raid1", 00:41:24.932 "superblock": true, 00:41:24.932 "num_base_bdevs": 2, 00:41:24.932 "num_base_bdevs_discovered": 2, 00:41:24.932 "num_base_bdevs_operational": 2, 00:41:24.932 "base_bdevs_list": [ 00:41:24.932 { 00:41:24.932 "name": "spare", 00:41:24.932 "uuid": "5939469a-3b65-5f19-8849-bdda6dc1ca0f", 00:41:24.932 "is_configured": true, 00:41:24.932 "data_offset": 2048, 00:41:24.932 "data_size": 63488 00:41:24.932 }, 00:41:24.932 { 00:41:24.932 "name": "BaseBdev2", 
00:41:24.932 "uuid": "dece7ccd-9b37-5bd9-a509-1313eb39c66e", 00:41:24.932 "is_configured": true, 00:41:24.932 "data_offset": 2048, 00:41:24.932 "data_size": 63488 00:41:24.932 } 00:41:24.933 ] 00:41:24.933 }' 00:41:24.933 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:24.933 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:41:24.933 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:24.933 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:41:24.933 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:24.933 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:24.933 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:24.933 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:41:25.192 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:25.192 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:41:25.192 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:41:25.192 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:25.192 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:25.192 [2024-12-09 05:32:11.952910] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:25.192 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:25.192 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 1 00:41:25.192 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:25.192 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:25.192 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:41:25.192 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:41:25.192 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:41:25.192 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:25.192 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:25.192 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:25.192 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:25.192 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:25.192 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:25.192 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:25.192 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:25.192 05:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:25.192 05:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:25.192 "name": "raid_bdev1", 00:41:25.192 "uuid": "1ae3d4d6-f197-48e4-9c4a-507d06e11733", 00:41:25.192 "strip_size_kb": 0, 00:41:25.192 "state": "online", 00:41:25.192 "raid_level": "raid1", 00:41:25.192 "superblock": true, 00:41:25.192 "num_base_bdevs": 2, 00:41:25.192 "num_base_bdevs_discovered": 1, 
00:41:25.192 "num_base_bdevs_operational": 1, 00:41:25.192 "base_bdevs_list": [ 00:41:25.192 { 00:41:25.192 "name": null, 00:41:25.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:25.192 "is_configured": false, 00:41:25.192 "data_offset": 0, 00:41:25.192 "data_size": 63488 00:41:25.192 }, 00:41:25.192 { 00:41:25.192 "name": "BaseBdev2", 00:41:25.192 "uuid": "dece7ccd-9b37-5bd9-a509-1313eb39c66e", 00:41:25.192 "is_configured": true, 00:41:25.192 "data_offset": 2048, 00:41:25.192 "data_size": 63488 00:41:25.192 } 00:41:25.192 ] 00:41:25.192 }' 00:41:25.192 05:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:25.192 05:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:25.759 05:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:41:25.759 05:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:25.759 05:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:25.759 [2024-12-09 05:32:12.497191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:25.759 [2024-12-09 05:32:12.497553] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:41:25.759 [2024-12-09 05:32:12.497587] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:41:25.759 [2024-12-09 05:32:12.497637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:25.759 [2024-12-09 05:32:12.512159] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:41:25.759 05:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:25.759 05:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:41:25.759 [2024-12-09 05:32:12.514903] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:26.695 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:26.695 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:26.695 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:26.695 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:26.695 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:26.695 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:26.695 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:26.695 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:26.695 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:26.695 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:26.695 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:26.695 "name": "raid_bdev1", 00:41:26.695 "uuid": "1ae3d4d6-f197-48e4-9c4a-507d06e11733", 00:41:26.695 "strip_size_kb": 0, 00:41:26.695 "state": "online", 
00:41:26.695 "raid_level": "raid1", 00:41:26.695 "superblock": true, 00:41:26.695 "num_base_bdevs": 2, 00:41:26.695 "num_base_bdevs_discovered": 2, 00:41:26.695 "num_base_bdevs_operational": 2, 00:41:26.695 "process": { 00:41:26.695 "type": "rebuild", 00:41:26.695 "target": "spare", 00:41:26.695 "progress": { 00:41:26.695 "blocks": 20480, 00:41:26.695 "percent": 32 00:41:26.695 } 00:41:26.695 }, 00:41:26.695 "base_bdevs_list": [ 00:41:26.695 { 00:41:26.695 "name": "spare", 00:41:26.695 "uuid": "5939469a-3b65-5f19-8849-bdda6dc1ca0f", 00:41:26.695 "is_configured": true, 00:41:26.695 "data_offset": 2048, 00:41:26.695 "data_size": 63488 00:41:26.695 }, 00:41:26.695 { 00:41:26.695 "name": "BaseBdev2", 00:41:26.695 "uuid": "dece7ccd-9b37-5bd9-a509-1313eb39c66e", 00:41:26.695 "is_configured": true, 00:41:26.695 "data_offset": 2048, 00:41:26.695 "data_size": 63488 00:41:26.695 } 00:41:26.695 ] 00:41:26.695 }' 00:41:26.695 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:26.695 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:26.695 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:26.955 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:26.955 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:41:26.955 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:26.955 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:26.955 [2024-12-09 05:32:13.679988] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:26.955 [2024-12-09 05:32:13.722885] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:41:26.955 [2024-12-09 
05:32:13.723097] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:26.955 [2024-12-09 05:32:13.723127] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:26.955 [2024-12-09 05:32:13.723151] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:41:26.955 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:26.955 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:41:26.955 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:26.955 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:26.955 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:41:26.955 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:41:26.955 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:41:26.955 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:26.955 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:26.955 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:26.955 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:26.955 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:26.955 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:26.955 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:26.955 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:41:26.955 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:26.955 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:26.955 "name": "raid_bdev1", 00:41:26.955 "uuid": "1ae3d4d6-f197-48e4-9c4a-507d06e11733", 00:41:26.955 "strip_size_kb": 0, 00:41:26.955 "state": "online", 00:41:26.955 "raid_level": "raid1", 00:41:26.955 "superblock": true, 00:41:26.955 "num_base_bdevs": 2, 00:41:26.955 "num_base_bdevs_discovered": 1, 00:41:26.955 "num_base_bdevs_operational": 1, 00:41:26.955 "base_bdevs_list": [ 00:41:26.955 { 00:41:26.955 "name": null, 00:41:26.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:26.955 "is_configured": false, 00:41:26.955 "data_offset": 0, 00:41:26.955 "data_size": 63488 00:41:26.955 }, 00:41:26.955 { 00:41:26.955 "name": "BaseBdev2", 00:41:26.955 "uuid": "dece7ccd-9b37-5bd9-a509-1313eb39c66e", 00:41:26.955 "is_configured": true, 00:41:26.955 "data_offset": 2048, 00:41:26.955 "data_size": 63488 00:41:26.955 } 00:41:26.955 ] 00:41:26.955 }' 00:41:26.955 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:26.955 05:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:27.522 05:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:41:27.522 05:32:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.522 05:32:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:27.522 [2024-12-09 05:32:14.299163] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:41:27.522 [2024-12-09 05:32:14.299403] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:27.522 [2024-12-09 05:32:14.299449] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:41:27.522 [2024-12-09 05:32:14.299468] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:27.522 [2024-12-09 05:32:14.300153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:27.522 [2024-12-09 05:32:14.300195] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:41:27.522 [2024-12-09 05:32:14.300297] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:41:27.522 [2024-12-09 05:32:14.300324] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:41:27.522 [2024-12-09 05:32:14.300336] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:41:27.522 [2024-12-09 05:32:14.300371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:27.522 [2024-12-09 05:32:14.312667] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:41:27.522 spare 00:41:27.522 05:32:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.522 05:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:41:27.522 [2024-12-09 05:32:14.315113] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:28.460 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:28.460 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:28.460 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:28.460 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:28.460 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:28.460 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:28.460 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:28.460 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.460 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:28.460 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.460 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:28.460 "name": "raid_bdev1", 00:41:28.460 "uuid": "1ae3d4d6-f197-48e4-9c4a-507d06e11733", 00:41:28.460 "strip_size_kb": 0, 00:41:28.460 "state": "online", 00:41:28.460 "raid_level": "raid1", 00:41:28.460 "superblock": true, 00:41:28.460 "num_base_bdevs": 2, 00:41:28.460 "num_base_bdevs_discovered": 2, 00:41:28.460 "num_base_bdevs_operational": 2, 00:41:28.460 "process": { 00:41:28.460 "type": "rebuild", 00:41:28.460 "target": "spare", 00:41:28.460 "progress": { 00:41:28.460 "blocks": 20480, 00:41:28.460 "percent": 32 00:41:28.460 } 00:41:28.460 }, 00:41:28.460 "base_bdevs_list": [ 00:41:28.460 { 00:41:28.460 "name": "spare", 00:41:28.460 "uuid": "5939469a-3b65-5f19-8849-bdda6dc1ca0f", 00:41:28.460 "is_configured": true, 00:41:28.460 "data_offset": 2048, 00:41:28.460 "data_size": 63488 00:41:28.460 }, 00:41:28.460 { 00:41:28.460 "name": "BaseBdev2", 00:41:28.460 "uuid": "dece7ccd-9b37-5bd9-a509-1313eb39c66e", 00:41:28.460 "is_configured": true, 00:41:28.460 "data_offset": 2048, 00:41:28.460 "data_size": 63488 00:41:28.460 } 00:41:28.460 ] 00:41:28.460 }' 00:41:28.460 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:28.460 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:41:28.460 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:28.722 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:28.722 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:41:28.723 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.723 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:28.723 [2024-12-09 05:32:15.472910] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:28.723 [2024-12-09 05:32:15.523114] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:41:28.723 [2024-12-09 05:32:15.523409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:28.723 [2024-12-09 05:32:15.523442] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:28.723 [2024-12-09 05:32:15.523455] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:41:28.723 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.723 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:41:28.723 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:28.723 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:28.723 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:41:28.723 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:41:28.723 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:41:28.723 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:28.723 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:28.723 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:28.723 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:28.723 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:28.723 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.723 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:28.723 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:28.723 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.723 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:28.723 "name": "raid_bdev1", 00:41:28.723 "uuid": "1ae3d4d6-f197-48e4-9c4a-507d06e11733", 00:41:28.723 "strip_size_kb": 0, 00:41:28.723 "state": "online", 00:41:28.723 "raid_level": "raid1", 00:41:28.723 "superblock": true, 00:41:28.723 "num_base_bdevs": 2, 00:41:28.723 "num_base_bdevs_discovered": 1, 00:41:28.723 "num_base_bdevs_operational": 1, 00:41:28.723 "base_bdevs_list": [ 00:41:28.723 { 00:41:28.723 "name": null, 00:41:28.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:28.723 "is_configured": false, 00:41:28.723 "data_offset": 0, 00:41:28.723 "data_size": 63488 00:41:28.723 }, 00:41:28.723 { 00:41:28.723 "name": "BaseBdev2", 00:41:28.723 "uuid": "dece7ccd-9b37-5bd9-a509-1313eb39c66e", 00:41:28.723 "is_configured": true, 00:41:28.723 "data_offset": 2048, 00:41:28.723 "data_size": 63488 00:41:28.723 } 00:41:28.723 ] 00:41:28.723 }' 
00:41:28.723 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:28.723 05:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:29.291 05:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:29.291 05:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:29.291 05:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:41:29.291 05:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:41:29.291 05:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:29.291 05:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:29.291 05:32:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.291 05:32:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:29.291 05:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:29.291 05:32:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.291 05:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:29.291 "name": "raid_bdev1", 00:41:29.291 "uuid": "1ae3d4d6-f197-48e4-9c4a-507d06e11733", 00:41:29.291 "strip_size_kb": 0, 00:41:29.291 "state": "online", 00:41:29.291 "raid_level": "raid1", 00:41:29.291 "superblock": true, 00:41:29.291 "num_base_bdevs": 2, 00:41:29.291 "num_base_bdevs_discovered": 1, 00:41:29.291 "num_base_bdevs_operational": 1, 00:41:29.291 "base_bdevs_list": [ 00:41:29.291 { 00:41:29.291 "name": null, 00:41:29.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:29.291 "is_configured": false, 00:41:29.291 "data_offset": 0, 
00:41:29.291 "data_size": 63488 00:41:29.291 }, 00:41:29.291 { 00:41:29.291 "name": "BaseBdev2", 00:41:29.291 "uuid": "dece7ccd-9b37-5bd9-a509-1313eb39c66e", 00:41:29.291 "is_configured": true, 00:41:29.291 "data_offset": 2048, 00:41:29.291 "data_size": 63488 00:41:29.291 } 00:41:29.291 ] 00:41:29.291 }' 00:41:29.291 05:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:29.291 05:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:41:29.291 05:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:29.291 05:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:41:29.291 05:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:41:29.291 05:32:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.291 05:32:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:29.291 05:32:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.291 05:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:41:29.291 05:32:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.291 05:32:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:29.291 [2024-12-09 05:32:16.252815] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:41:29.291 [2024-12-09 05:32:16.252912] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:29.291 [2024-12-09 05:32:16.252954] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:41:29.291 [2024-12-09 05:32:16.252970] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:29.291 [2024-12-09 05:32:16.253600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:29.291 [2024-12-09 05:32:16.253636] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:41:29.291 [2024-12-09 05:32:16.253738] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:41:29.291 [2024-12-09 05:32:16.253757] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:41:29.291 [2024-12-09 05:32:16.253819] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:41:29.291 [2024-12-09 05:32:16.253834] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:41:29.291 BaseBdev1 00:41:29.291 05:32:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.291 05:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:41:30.666 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:41:30.666 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:30.666 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:30.666 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:41:30.666 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:41:30.666 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:41:30.666 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:30.666 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:30.666 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:30.666 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:30.666 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:30.666 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:30.666 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.667 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:30.667 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.667 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:30.667 "name": "raid_bdev1", 00:41:30.667 "uuid": "1ae3d4d6-f197-48e4-9c4a-507d06e11733", 00:41:30.667 "strip_size_kb": 0, 00:41:30.667 "state": "online", 00:41:30.667 "raid_level": "raid1", 00:41:30.667 "superblock": true, 00:41:30.667 "num_base_bdevs": 2, 00:41:30.667 "num_base_bdevs_discovered": 1, 00:41:30.667 "num_base_bdevs_operational": 1, 00:41:30.667 "base_bdevs_list": [ 00:41:30.667 { 00:41:30.667 "name": null, 00:41:30.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:30.667 "is_configured": false, 00:41:30.667 "data_offset": 0, 00:41:30.667 "data_size": 63488 00:41:30.667 }, 00:41:30.667 { 00:41:30.667 "name": "BaseBdev2", 00:41:30.667 "uuid": "dece7ccd-9b37-5bd9-a509-1313eb39c66e", 00:41:30.667 "is_configured": true, 00:41:30.667 "data_offset": 2048, 00:41:30.667 "data_size": 63488 00:41:30.667 } 00:41:30.667 ] 00:41:30.667 }' 00:41:30.667 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:30.667 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:41:30.925 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:30.925 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:30.925 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:41:30.925 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:41:30.925 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:30.925 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:30.925 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:30.925 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.925 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:30.926 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.926 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:30.926 "name": "raid_bdev1", 00:41:30.926 "uuid": "1ae3d4d6-f197-48e4-9c4a-507d06e11733", 00:41:30.926 "strip_size_kb": 0, 00:41:30.926 "state": "online", 00:41:30.926 "raid_level": "raid1", 00:41:30.926 "superblock": true, 00:41:30.926 "num_base_bdevs": 2, 00:41:30.926 "num_base_bdevs_discovered": 1, 00:41:30.926 "num_base_bdevs_operational": 1, 00:41:30.926 "base_bdevs_list": [ 00:41:30.926 { 00:41:30.926 "name": null, 00:41:30.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:30.926 "is_configured": false, 00:41:30.926 "data_offset": 0, 00:41:30.926 "data_size": 63488 00:41:30.926 }, 00:41:30.926 { 00:41:30.926 "name": "BaseBdev2", 00:41:30.926 "uuid": "dece7ccd-9b37-5bd9-a509-1313eb39c66e", 00:41:30.926 "is_configured": true, 
00:41:30.926 "data_offset": 2048, 00:41:30.926 "data_size": 63488 00:41:30.926 } 00:41:30.926 ] 00:41:30.926 }' 00:41:30.926 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:31.183 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:41:31.183 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:31.183 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:41:31.183 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:41:31.183 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:41:31.183 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:41:31.183 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:41:31.183 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:31.183 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:41:31.183 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:31.183 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:41:31.183 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.183 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:31.183 [2024-12-09 05:32:17.957552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:41:31.183 [2024-12-09 05:32:17.957868] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:41:31.183 [2024-12-09 05:32:17.957898] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:41:31.183 request: 00:41:31.183 { 00:41:31.183 "base_bdev": "BaseBdev1", 00:41:31.183 "raid_bdev": "raid_bdev1", 00:41:31.183 "method": "bdev_raid_add_base_bdev", 00:41:31.183 "req_id": 1 00:41:31.183 } 00:41:31.184 Got JSON-RPC error response 00:41:31.184 response: 00:41:31.184 { 00:41:31.184 "code": -22, 00:41:31.184 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:41:31.184 } 00:41:31.184 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:41:31.184 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:41:31.184 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:31.184 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:31.184 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:31.184 05:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:41:32.118 05:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:41:32.118 05:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:32.118 05:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:32.118 05:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:41:32.118 05:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:41:32.118 05:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:41:32.118 05:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:32.118 05:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:32.118 05:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:32.118 05:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:32.118 05:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:32.118 05:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.118 05:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:32.118 05:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:32.118 05:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.118 05:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:32.118 "name": "raid_bdev1", 00:41:32.118 "uuid": "1ae3d4d6-f197-48e4-9c4a-507d06e11733", 00:41:32.118 "strip_size_kb": 0, 00:41:32.118 "state": "online", 00:41:32.118 "raid_level": "raid1", 00:41:32.118 "superblock": true, 00:41:32.118 "num_base_bdevs": 2, 00:41:32.118 "num_base_bdevs_discovered": 1, 00:41:32.118 "num_base_bdevs_operational": 1, 00:41:32.118 "base_bdevs_list": [ 00:41:32.118 { 00:41:32.118 "name": null, 00:41:32.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:32.118 "is_configured": false, 00:41:32.118 "data_offset": 0, 00:41:32.118 "data_size": 63488 00:41:32.118 }, 00:41:32.118 { 00:41:32.118 "name": "BaseBdev2", 00:41:32.118 "uuid": "dece7ccd-9b37-5bd9-a509-1313eb39c66e", 00:41:32.118 "is_configured": true, 00:41:32.118 "data_offset": 2048, 00:41:32.118 "data_size": 63488 00:41:32.118 } 00:41:32.118 ] 00:41:32.118 }' 
00:41:32.118 05:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:32.118 05:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:32.683 05:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:32.684 05:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:32.684 05:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:41:32.684 05:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:41:32.684 05:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:32.684 05:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:32.684 05:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.684 05:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:32.684 05:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:32.684 05:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.684 05:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:32.684 "name": "raid_bdev1", 00:41:32.684 "uuid": "1ae3d4d6-f197-48e4-9c4a-507d06e11733", 00:41:32.684 "strip_size_kb": 0, 00:41:32.684 "state": "online", 00:41:32.684 "raid_level": "raid1", 00:41:32.684 "superblock": true, 00:41:32.684 "num_base_bdevs": 2, 00:41:32.684 "num_base_bdevs_discovered": 1, 00:41:32.684 "num_base_bdevs_operational": 1, 00:41:32.684 "base_bdevs_list": [ 00:41:32.684 { 00:41:32.684 "name": null, 00:41:32.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:32.684 "is_configured": false, 00:41:32.684 "data_offset": 0, 
00:41:32.684 "data_size": 63488 00:41:32.684 }, 00:41:32.684 { 00:41:32.684 "name": "BaseBdev2", 00:41:32.684 "uuid": "dece7ccd-9b37-5bd9-a509-1313eb39c66e", 00:41:32.684 "is_configured": true, 00:41:32.684 "data_offset": 2048, 00:41:32.684 "data_size": 63488 00:41:32.684 } 00:41:32.684 ] 00:41:32.684 }' 00:41:32.684 05:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:32.684 05:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:41:32.684 05:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:32.942 05:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:41:32.942 05:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77186 00:41:32.942 05:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77186 ']' 00:41:32.942 05:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77186 00:41:32.942 05:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:41:32.942 05:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:32.942 05:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77186 00:41:32.942 killing process with pid 77186 00:41:32.942 Received shutdown signal, test time was about 19.487627 seconds 00:41:32.942 00:41:32.942 Latency(us) 00:41:32.942 [2024-12-09T05:32:19.914Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:32.942 [2024-12-09T05:32:19.914Z] =================================================================================================================== 00:41:32.942 [2024-12-09T05:32:19.914Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:32.942 05:32:19 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:32.942 05:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:32.942 05:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77186' 00:41:32.942 05:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77186 00:41:32.942 [2024-12-09 05:32:19.714004] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:41:32.942 05:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77186 00:41:32.942 [2024-12-09 05:32:19.714201] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:32.942 [2024-12-09 05:32:19.714277] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:32.942 [2024-12-09 05:32:19.714301] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:41:33.200 [2024-12-09 05:32:19.930472] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:41:34.573 05:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:41:34.573 00:41:34.573 real 0m23.141s 00:41:34.573 user 0m31.054s 00:41:34.573 sys 0m2.133s 00:41:34.573 05:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:34.573 ************************************ 00:41:34.573 END TEST raid_rebuild_test_sb_io 00:41:34.573 ************************************ 00:41:34.573 05:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:41:34.573 05:32:21 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:41:34.573 05:32:21 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:41:34.573 05:32:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 
00:41:34.573 05:32:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:34.573 05:32:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:41:34.573 ************************************ 00:41:34.573 START TEST raid_rebuild_test 00:41:34.573 ************************************ 00:41:34.573 05:32:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:41:34.573 05:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:41:34.573 05:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:41:34.573 05:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:41:34.573 05:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:41:34.573 05:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:41:34.573 05:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:41:34.574 05:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:41:34.574 05:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:41:34.574 05:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:41:34.574 05:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:41:34.574 05:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:41:34.574 05:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:41:34.574 05:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:41:34.574 05:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:41:34.574 05:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:41:34.574 05:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i 
<= num_base_bdevs )) 00:41:34.574 05:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:41:34.574 05:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:41:34.574 05:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:41:34.574 05:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:41:34.574 05:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:41:34.574 05:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:41:34.574 05:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:41:34.574 05:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:41:34.574 05:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:41:34.574 05:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:41:34.574 05:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:41:34.574 05:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:41:34.574 05:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:41:34.574 05:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:41:34.574 05:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77905 00:41:34.574 05:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77905 00:41:34.574 05:32:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77905 ']' 00:41:34.574 05:32:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:34.574 05:32:21 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:34.574 05:32:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:34.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:34.574 05:32:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:34.574 05:32:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:34.574 [2024-12-09 05:32:21.459720] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:41:34.574 [2024-12-09 05:32:21.460289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:41:34.574 Zero copy mechanism will not be used. 00:41:34.574 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77905 ] 00:41:34.833 [2024-12-09 05:32:21.653642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:35.091 [2024-12-09 05:32:21.811984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:35.349 [2024-12-09 05:32:22.085681] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:41:35.349 [2024-12-09 05:32:22.085734] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:41:35.607 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:35.607 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:41:35.607 05:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:41:35.607 05:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:41:35.607 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.607 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:35.607 BaseBdev1_malloc 00:41:35.607 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.607 05:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:41:35.607 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.607 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:35.607 [2024-12-09 05:32:22.550657] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:41:35.607 [2024-12-09 05:32:22.550758] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:35.607 [2024-12-09 05:32:22.550850] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:41:35.607 [2024-12-09 05:32:22.550872] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:35.607 [2024-12-09 05:32:22.553800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:35.607 [2024-12-09 05:32:22.553852] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:41:35.607 BaseBdev1 00:41:35.607 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.607 05:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:41:35.607 05:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:41:35.607 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.607 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:41:35.865 BaseBdev2_malloc 00:41:35.865 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.865 05:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:41:35.865 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.865 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:35.865 [2024-12-09 05:32:22.600730] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:41:35.865 [2024-12-09 05:32:22.600839] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:35.865 [2024-12-09 05:32:22.600875] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:41:35.865 [2024-12-09 05:32:22.600895] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:35.865 [2024-12-09 05:32:22.603863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:35.865 [2024-12-09 05:32:22.603920] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:41:35.865 BaseBdev2 00:41:35.865 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.865 05:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:35.866 BaseBdev3_malloc 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:35.866 [2024-12-09 05:32:22.668106] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:41:35.866 [2024-12-09 05:32:22.668203] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:35.866 [2024-12-09 05:32:22.668238] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:41:35.866 [2024-12-09 05:32:22.668258] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:35.866 [2024-12-09 05:32:22.672550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:35.866 [2024-12-09 05:32:22.672646] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:41:35.866 BaseBdev3 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:35.866 BaseBdev4_malloc 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:41:35.866 [2024-12-09 05:32:22.727878] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:41:35.866 [2024-12-09 05:32:22.727981] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:35.866 [2024-12-09 05:32:22.728015] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:41:35.866 [2024-12-09 05:32:22.728035] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:35.866 [2024-12-09 05:32:22.731268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:35.866 [2024-12-09 05:32:22.731586] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:41:35.866 BaseBdev4 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:35.866 spare_malloc 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:35.866 spare_delay 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:41:35.866 
05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:35.866 [2024-12-09 05:32:22.788476] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:41:35.866 [2024-12-09 05:32:22.788570] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:35.866 [2024-12-09 05:32:22.788599] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:41:35.866 [2024-12-09 05:32:22.788618] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:35.866 [2024-12-09 05:32:22.791807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:35.866 [2024-12-09 05:32:22.791874] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:41:35.866 spare 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:35.866 [2024-12-09 05:32:22.796615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:41:35.866 [2024-12-09 05:32:22.799435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:41:35.866 [2024-12-09 05:32:22.799552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:41:35.866 [2024-12-09 05:32:22.799661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:41:35.866 [2024-12-09 05:32:22.799801] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007780 00:41:35.866 [2024-12-09 05:32:22.799858] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:41:35.866 [2024-12-09 05:32:22.800263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:41:35.866 [2024-12-09 05:32:22.800526] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:41:35.866 [2024-12-09 05:32:22.800547] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:41:35.866 [2024-12-09 05:32:22.800863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:35.866 05:32:22 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:35.866 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:36.125 05:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:36.125 "name": "raid_bdev1", 00:41:36.125 "uuid": "8fb8cb64-7354-48bb-ac79-677ac9fe2f31", 00:41:36.125 "strip_size_kb": 0, 00:41:36.125 "state": "online", 00:41:36.125 "raid_level": "raid1", 00:41:36.125 "superblock": false, 00:41:36.125 "num_base_bdevs": 4, 00:41:36.125 "num_base_bdevs_discovered": 4, 00:41:36.125 "num_base_bdevs_operational": 4, 00:41:36.125 "base_bdevs_list": [ 00:41:36.125 { 00:41:36.125 "name": "BaseBdev1", 00:41:36.125 "uuid": "3f635cb3-1c3e-51c9-a67d-98bd5da9b9fa", 00:41:36.125 "is_configured": true, 00:41:36.125 "data_offset": 0, 00:41:36.125 "data_size": 65536 00:41:36.125 }, 00:41:36.125 { 00:41:36.125 "name": "BaseBdev2", 00:41:36.125 "uuid": "46b54c0d-cbc2-54a1-84f0-89ff2aa48785", 00:41:36.125 "is_configured": true, 00:41:36.125 "data_offset": 0, 00:41:36.125 "data_size": 65536 00:41:36.125 }, 00:41:36.125 { 00:41:36.125 "name": "BaseBdev3", 00:41:36.125 "uuid": "d1366e2b-9ea5-5a03-9ea9-fb904b31b977", 00:41:36.125 "is_configured": true, 00:41:36.125 "data_offset": 0, 00:41:36.125 "data_size": 65536 00:41:36.125 }, 00:41:36.125 { 00:41:36.125 "name": "BaseBdev4", 00:41:36.125 "uuid": "6556bd26-916b-5467-aed3-5719b0566ec7", 00:41:36.125 "is_configured": true, 00:41:36.125 "data_offset": 0, 00:41:36.125 "data_size": 65536 00:41:36.125 } 00:41:36.125 ] 00:41:36.125 }' 00:41:36.125 05:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:36.125 05:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:41:36.384 05:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:41:36.384 05:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:36.384 05:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:36.384 05:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:41:36.384 [2024-12-09 05:32:23.345395] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:36.642 05:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:36.642 05:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:41:36.642 05:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:41:36.642 05:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:36.642 05:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:36.642 05:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:36.642 05:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:36.642 05:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:41:36.642 05:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:41:36.642 05:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:41:36.643 05:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:41:36.643 05:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:41:36.643 05:32:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:41:36.643 05:32:23 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:41:36.643 05:32:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:41:36.643 05:32:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:41:36.643 05:32:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:41:36.643 05:32:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:41:36.643 05:32:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:41:36.643 05:32:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:41:36.643 05:32:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:41:36.901 [2024-12-09 05:32:23.725153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:41:36.901 /dev/nbd0 00:41:36.901 05:32:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:41:36.901 05:32:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:41:36.901 05:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:41:36.901 05:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:41:36.901 05:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:36.902 05:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:36.902 05:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:41:36.902 05:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:41:36.902 05:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:36.902 05:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:36.902 05:32:23 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:36.902 1+0 records in 00:41:36.902 1+0 records out 00:41:36.902 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327274 s, 12.5 MB/s 00:41:36.902 05:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:36.902 05:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:41:36.902 05:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:36.902 05:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:41:36.902 05:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:41:36.902 05:32:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:36.902 05:32:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:41:36.902 05:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:41:36.902 05:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:41:36.902 05:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:41:46.871 65536+0 records in 00:41:46.871 65536+0 records out 00:41:46.871 33554432 bytes (34 MB, 32 MiB) copied, 8.3865 s, 4.0 MB/s 00:41:46.871 05:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:41:46.871 05:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:41:46.871 05:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:41:46.871 05:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:46.871 
05:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:41:46.871 05:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:46.871 05:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:41:46.871 [2024-12-09 05:32:32.437271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:46.871 05:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:46.871 05:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:46.872 [2024-12-09 05:32:32.469307] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:46.872 "name": "raid_bdev1", 00:41:46.872 "uuid": "8fb8cb64-7354-48bb-ac79-677ac9fe2f31", 00:41:46.872 "strip_size_kb": 0, 00:41:46.872 "state": "online", 00:41:46.872 "raid_level": "raid1", 00:41:46.872 "superblock": false, 00:41:46.872 "num_base_bdevs": 4, 00:41:46.872 "num_base_bdevs_discovered": 3, 00:41:46.872 "num_base_bdevs_operational": 3, 00:41:46.872 "base_bdevs_list": [ 00:41:46.872 { 00:41:46.872 "name": null, 00:41:46.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:46.872 
"is_configured": false, 00:41:46.872 "data_offset": 0, 00:41:46.872 "data_size": 65536 00:41:46.872 }, 00:41:46.872 { 00:41:46.872 "name": "BaseBdev2", 00:41:46.872 "uuid": "46b54c0d-cbc2-54a1-84f0-89ff2aa48785", 00:41:46.872 "is_configured": true, 00:41:46.872 "data_offset": 0, 00:41:46.872 "data_size": 65536 00:41:46.872 }, 00:41:46.872 { 00:41:46.872 "name": "BaseBdev3", 00:41:46.872 "uuid": "d1366e2b-9ea5-5a03-9ea9-fb904b31b977", 00:41:46.872 "is_configured": true, 00:41:46.872 "data_offset": 0, 00:41:46.872 "data_size": 65536 00:41:46.872 }, 00:41:46.872 { 00:41:46.872 "name": "BaseBdev4", 00:41:46.872 "uuid": "6556bd26-916b-5467-aed3-5719b0566ec7", 00:41:46.872 "is_configured": true, 00:41:46.872 "data_offset": 0, 00:41:46.872 "data_size": 65536 00:41:46.872 } 00:41:46.872 ] 00:41:46.872 }' 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:46.872 [2024-12-09 05:32:32.961691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:46.872 [2024-12-09 05:32:32.977556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.872 05:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:41:46.872 [2024-12-09 05:32:32.980620] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:47.131 05:32:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:47.131 05:32:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:47.131 05:32:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:47.131 05:32:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:47.131 05:32:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:47.131 05:32:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:47.131 05:32:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:47.131 05:32:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.131 05:32:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:47.131 05:32:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.131 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:47.131 "name": "raid_bdev1", 00:41:47.131 "uuid": "8fb8cb64-7354-48bb-ac79-677ac9fe2f31", 00:41:47.131 "strip_size_kb": 0, 00:41:47.131 "state": "online", 00:41:47.131 "raid_level": "raid1", 00:41:47.131 "superblock": false, 00:41:47.131 "num_base_bdevs": 4, 00:41:47.131 "num_base_bdevs_discovered": 4, 00:41:47.131 "num_base_bdevs_operational": 4, 00:41:47.131 "process": { 00:41:47.131 "type": "rebuild", 00:41:47.131 "target": "spare", 00:41:47.131 "progress": { 00:41:47.131 "blocks": 20480, 00:41:47.131 "percent": 31 00:41:47.131 } 00:41:47.131 }, 00:41:47.131 "base_bdevs_list": [ 00:41:47.131 { 00:41:47.131 "name": "spare", 00:41:47.131 "uuid": "6922e2a8-4a7a-50bf-b688-a2970de55ef1", 00:41:47.131 "is_configured": true, 00:41:47.131 "data_offset": 0, 00:41:47.131 "data_size": 65536 00:41:47.131 }, 00:41:47.131 { 00:41:47.131 "name": "BaseBdev2", 00:41:47.131 "uuid": 
"46b54c0d-cbc2-54a1-84f0-89ff2aa48785", 00:41:47.131 "is_configured": true, 00:41:47.131 "data_offset": 0, 00:41:47.131 "data_size": 65536 00:41:47.131 }, 00:41:47.131 { 00:41:47.131 "name": "BaseBdev3", 00:41:47.131 "uuid": "d1366e2b-9ea5-5a03-9ea9-fb904b31b977", 00:41:47.131 "is_configured": true, 00:41:47.131 "data_offset": 0, 00:41:47.131 "data_size": 65536 00:41:47.131 }, 00:41:47.131 { 00:41:47.131 "name": "BaseBdev4", 00:41:47.131 "uuid": "6556bd26-916b-5467-aed3-5719b0566ec7", 00:41:47.131 "is_configured": true, 00:41:47.131 "data_offset": 0, 00:41:47.131 "data_size": 65536 00:41:47.131 } 00:41:47.131 ] 00:41:47.131 }' 00:41:47.131 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:47.131 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:47.131 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:47.390 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:47.390 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:41:47.390 05:32:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.390 05:32:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:47.390 [2024-12-09 05:32:34.154587] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:47.390 [2024-12-09 05:32:34.190576] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:41:47.390 [2024-12-09 05:32:34.190665] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:47.390 [2024-12-09 05:32:34.190694] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:47.390 [2024-12-09 05:32:34.190711] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:41:47.390 05:32:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.390 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:41:47.390 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:47.390 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:47.390 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:41:47.390 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:41:47.390 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:41:47.390 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:47.390 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:47.390 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:47.390 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:47.390 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:47.390 05:32:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.390 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:47.390 05:32:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:47.390 05:32:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.390 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:47.390 "name": "raid_bdev1", 00:41:47.390 "uuid": "8fb8cb64-7354-48bb-ac79-677ac9fe2f31", 00:41:47.390 "strip_size_kb": 0, 00:41:47.390 "state": "online", 
00:41:47.390 "raid_level": "raid1", 00:41:47.390 "superblock": false, 00:41:47.390 "num_base_bdevs": 4, 00:41:47.390 "num_base_bdevs_discovered": 3, 00:41:47.390 "num_base_bdevs_operational": 3, 00:41:47.390 "base_bdevs_list": [ 00:41:47.390 { 00:41:47.390 "name": null, 00:41:47.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:47.390 "is_configured": false, 00:41:47.390 "data_offset": 0, 00:41:47.390 "data_size": 65536 00:41:47.390 }, 00:41:47.390 { 00:41:47.390 "name": "BaseBdev2", 00:41:47.390 "uuid": "46b54c0d-cbc2-54a1-84f0-89ff2aa48785", 00:41:47.390 "is_configured": true, 00:41:47.390 "data_offset": 0, 00:41:47.390 "data_size": 65536 00:41:47.390 }, 00:41:47.390 { 00:41:47.390 "name": "BaseBdev3", 00:41:47.390 "uuid": "d1366e2b-9ea5-5a03-9ea9-fb904b31b977", 00:41:47.391 "is_configured": true, 00:41:47.391 "data_offset": 0, 00:41:47.391 "data_size": 65536 00:41:47.391 }, 00:41:47.391 { 00:41:47.391 "name": "BaseBdev4", 00:41:47.391 "uuid": "6556bd26-916b-5467-aed3-5719b0566ec7", 00:41:47.391 "is_configured": true, 00:41:47.391 "data_offset": 0, 00:41:47.391 "data_size": 65536 00:41:47.391 } 00:41:47.391 ] 00:41:47.391 }' 00:41:47.391 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:47.391 05:32:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:47.958 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:47.958 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:47.958 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:41:47.958 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:41:47.958 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:47.958 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:41:47.958 05:32:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.958 05:32:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:47.958 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:47.958 05:32:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.958 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:47.958 "name": "raid_bdev1", 00:41:47.958 "uuid": "8fb8cb64-7354-48bb-ac79-677ac9fe2f31", 00:41:47.958 "strip_size_kb": 0, 00:41:47.958 "state": "online", 00:41:47.958 "raid_level": "raid1", 00:41:47.958 "superblock": false, 00:41:47.958 "num_base_bdevs": 4, 00:41:47.959 "num_base_bdevs_discovered": 3, 00:41:47.959 "num_base_bdevs_operational": 3, 00:41:47.959 "base_bdevs_list": [ 00:41:47.959 { 00:41:47.959 "name": null, 00:41:47.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:47.959 "is_configured": false, 00:41:47.959 "data_offset": 0, 00:41:47.959 "data_size": 65536 00:41:47.959 }, 00:41:47.959 { 00:41:47.959 "name": "BaseBdev2", 00:41:47.959 "uuid": "46b54c0d-cbc2-54a1-84f0-89ff2aa48785", 00:41:47.959 "is_configured": true, 00:41:47.959 "data_offset": 0, 00:41:47.959 "data_size": 65536 00:41:47.959 }, 00:41:47.959 { 00:41:47.959 "name": "BaseBdev3", 00:41:47.959 "uuid": "d1366e2b-9ea5-5a03-9ea9-fb904b31b977", 00:41:47.959 "is_configured": true, 00:41:47.959 "data_offset": 0, 00:41:47.959 "data_size": 65536 00:41:47.959 }, 00:41:47.959 { 00:41:47.959 "name": "BaseBdev4", 00:41:47.959 "uuid": "6556bd26-916b-5467-aed3-5719b0566ec7", 00:41:47.959 "is_configured": true, 00:41:47.959 "data_offset": 0, 00:41:47.959 "data_size": 65536 00:41:47.959 } 00:41:47.959 ] 00:41:47.959 }' 00:41:47.959 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:47.959 05:32:34 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:41:47.959 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:47.959 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:41:47.959 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:41:47.959 05:32:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.959 05:32:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:47.959 [2024-12-09 05:32:34.916532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:48.216 [2024-12-09 05:32:34.930742] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:41:48.216 05:32:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:48.216 05:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:41:48.216 [2024-12-09 05:32:34.933607] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:49.150 05:32:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:49.150 05:32:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:49.150 05:32:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:49.150 05:32:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:49.151 05:32:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:49.151 05:32:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:49.151 05:32:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:49.151 05:32:35 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.151 05:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:49.151 05:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.151 05:32:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:49.151 "name": "raid_bdev1", 00:41:49.151 "uuid": "8fb8cb64-7354-48bb-ac79-677ac9fe2f31", 00:41:49.151 "strip_size_kb": 0, 00:41:49.151 "state": "online", 00:41:49.151 "raid_level": "raid1", 00:41:49.151 "superblock": false, 00:41:49.151 "num_base_bdevs": 4, 00:41:49.151 "num_base_bdevs_discovered": 4, 00:41:49.151 "num_base_bdevs_operational": 4, 00:41:49.151 "process": { 00:41:49.151 "type": "rebuild", 00:41:49.151 "target": "spare", 00:41:49.151 "progress": { 00:41:49.151 "blocks": 20480, 00:41:49.151 "percent": 31 00:41:49.151 } 00:41:49.151 }, 00:41:49.151 "base_bdevs_list": [ 00:41:49.151 { 00:41:49.151 "name": "spare", 00:41:49.151 "uuid": "6922e2a8-4a7a-50bf-b688-a2970de55ef1", 00:41:49.151 "is_configured": true, 00:41:49.151 "data_offset": 0, 00:41:49.151 "data_size": 65536 00:41:49.151 }, 00:41:49.151 { 00:41:49.151 "name": "BaseBdev2", 00:41:49.151 "uuid": "46b54c0d-cbc2-54a1-84f0-89ff2aa48785", 00:41:49.151 "is_configured": true, 00:41:49.151 "data_offset": 0, 00:41:49.151 "data_size": 65536 00:41:49.151 }, 00:41:49.151 { 00:41:49.151 "name": "BaseBdev3", 00:41:49.151 "uuid": "d1366e2b-9ea5-5a03-9ea9-fb904b31b977", 00:41:49.151 "is_configured": true, 00:41:49.151 "data_offset": 0, 00:41:49.151 "data_size": 65536 00:41:49.151 }, 00:41:49.151 { 00:41:49.151 "name": "BaseBdev4", 00:41:49.151 "uuid": "6556bd26-916b-5467-aed3-5719b0566ec7", 00:41:49.151 "is_configured": true, 00:41:49.151 "data_offset": 0, 00:41:49.151 "data_size": 65536 00:41:49.151 } 00:41:49.151 ] 00:41:49.151 }' 00:41:49.151 05:32:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:41:49.151 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:49.151 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:49.151 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:49.151 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:41:49.151 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:41:49.151 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:41:49.151 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:41:49.151 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:41:49.151 05:32:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.151 05:32:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:49.151 [2024-12-09 05:32:36.095563] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:41:49.409 [2024-12-09 05:32:36.143474] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:41:49.409 05:32:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.409 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:41:49.409 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:41:49.409 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:49.409 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:49.409 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:49.409 
05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:49.409 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:49.409 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:49.409 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:49.409 05:32:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.409 05:32:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:49.409 05:32:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.409 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:49.409 "name": "raid_bdev1", 00:41:49.409 "uuid": "8fb8cb64-7354-48bb-ac79-677ac9fe2f31", 00:41:49.409 "strip_size_kb": 0, 00:41:49.409 "state": "online", 00:41:49.409 "raid_level": "raid1", 00:41:49.410 "superblock": false, 00:41:49.410 "num_base_bdevs": 4, 00:41:49.410 "num_base_bdevs_discovered": 3, 00:41:49.410 "num_base_bdevs_operational": 3, 00:41:49.410 "process": { 00:41:49.410 "type": "rebuild", 00:41:49.410 "target": "spare", 00:41:49.410 "progress": { 00:41:49.410 "blocks": 24576, 00:41:49.410 "percent": 37 00:41:49.410 } 00:41:49.410 }, 00:41:49.410 "base_bdevs_list": [ 00:41:49.410 { 00:41:49.410 "name": "spare", 00:41:49.410 "uuid": "6922e2a8-4a7a-50bf-b688-a2970de55ef1", 00:41:49.410 "is_configured": true, 00:41:49.410 "data_offset": 0, 00:41:49.410 "data_size": 65536 00:41:49.410 }, 00:41:49.410 { 00:41:49.410 "name": null, 00:41:49.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:49.410 "is_configured": false, 00:41:49.410 "data_offset": 0, 00:41:49.410 "data_size": 65536 00:41:49.410 }, 00:41:49.410 { 00:41:49.410 "name": "BaseBdev3", 00:41:49.410 "uuid": "d1366e2b-9ea5-5a03-9ea9-fb904b31b977", 00:41:49.410 "is_configured": true, 
00:41:49.410 "data_offset": 0, 00:41:49.410 "data_size": 65536 00:41:49.410 }, 00:41:49.410 { 00:41:49.410 "name": "BaseBdev4", 00:41:49.410 "uuid": "6556bd26-916b-5467-aed3-5719b0566ec7", 00:41:49.410 "is_configured": true, 00:41:49.410 "data_offset": 0, 00:41:49.410 "data_size": 65536 00:41:49.410 } 00:41:49.410 ] 00:41:49.410 }' 00:41:49.410 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:49.410 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:49.410 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:49.410 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:49.410 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=492 00:41:49.410 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:49.410 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:49.410 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:49.410 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:49.410 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:49.410 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:49.410 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:49.410 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:49.410 05:32:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.410 05:32:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:49.410 05:32:36 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.410 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:49.410 "name": "raid_bdev1", 00:41:49.410 "uuid": "8fb8cb64-7354-48bb-ac79-677ac9fe2f31", 00:41:49.410 "strip_size_kb": 0, 00:41:49.410 "state": "online", 00:41:49.410 "raid_level": "raid1", 00:41:49.410 "superblock": false, 00:41:49.410 "num_base_bdevs": 4, 00:41:49.410 "num_base_bdevs_discovered": 3, 00:41:49.410 "num_base_bdevs_operational": 3, 00:41:49.410 "process": { 00:41:49.410 "type": "rebuild", 00:41:49.410 "target": "spare", 00:41:49.410 "progress": { 00:41:49.410 "blocks": 26624, 00:41:49.410 "percent": 40 00:41:49.410 } 00:41:49.410 }, 00:41:49.410 "base_bdevs_list": [ 00:41:49.410 { 00:41:49.410 "name": "spare", 00:41:49.410 "uuid": "6922e2a8-4a7a-50bf-b688-a2970de55ef1", 00:41:49.410 "is_configured": true, 00:41:49.410 "data_offset": 0, 00:41:49.410 "data_size": 65536 00:41:49.410 }, 00:41:49.410 { 00:41:49.410 "name": null, 00:41:49.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:49.410 "is_configured": false, 00:41:49.410 "data_offset": 0, 00:41:49.410 "data_size": 65536 00:41:49.410 }, 00:41:49.410 { 00:41:49.410 "name": "BaseBdev3", 00:41:49.410 "uuid": "d1366e2b-9ea5-5a03-9ea9-fb904b31b977", 00:41:49.410 "is_configured": true, 00:41:49.410 "data_offset": 0, 00:41:49.410 "data_size": 65536 00:41:49.410 }, 00:41:49.410 { 00:41:49.410 "name": "BaseBdev4", 00:41:49.410 "uuid": "6556bd26-916b-5467-aed3-5719b0566ec7", 00:41:49.410 "is_configured": true, 00:41:49.410 "data_offset": 0, 00:41:49.410 "data_size": 65536 00:41:49.410 } 00:41:49.410 ] 00:41:49.410 }' 00:41:49.410 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:49.668 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:49.668 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:41:49.668 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:49.668 05:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:41:50.642 05:32:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:50.642 05:32:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:50.642 05:32:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:50.642 05:32:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:50.642 05:32:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:50.642 05:32:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:50.642 05:32:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:50.642 05:32:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:50.642 05:32:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:50.642 05:32:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:50.642 05:32:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:50.642 05:32:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:50.642 "name": "raid_bdev1", 00:41:50.642 "uuid": "8fb8cb64-7354-48bb-ac79-677ac9fe2f31", 00:41:50.642 "strip_size_kb": 0, 00:41:50.642 "state": "online", 00:41:50.642 "raid_level": "raid1", 00:41:50.642 "superblock": false, 00:41:50.642 "num_base_bdevs": 4, 00:41:50.642 "num_base_bdevs_discovered": 3, 00:41:50.642 "num_base_bdevs_operational": 3, 00:41:50.642 "process": { 00:41:50.642 "type": "rebuild", 00:41:50.642 "target": "spare", 00:41:50.642 "progress": { 00:41:50.642 
"blocks": 51200, 00:41:50.642 "percent": 78 00:41:50.642 } 00:41:50.642 }, 00:41:50.642 "base_bdevs_list": [ 00:41:50.642 { 00:41:50.642 "name": "spare", 00:41:50.642 "uuid": "6922e2a8-4a7a-50bf-b688-a2970de55ef1", 00:41:50.642 "is_configured": true, 00:41:50.642 "data_offset": 0, 00:41:50.642 "data_size": 65536 00:41:50.642 }, 00:41:50.642 { 00:41:50.642 "name": null, 00:41:50.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:50.642 "is_configured": false, 00:41:50.642 "data_offset": 0, 00:41:50.642 "data_size": 65536 00:41:50.642 }, 00:41:50.642 { 00:41:50.642 "name": "BaseBdev3", 00:41:50.642 "uuid": "d1366e2b-9ea5-5a03-9ea9-fb904b31b977", 00:41:50.642 "is_configured": true, 00:41:50.642 "data_offset": 0, 00:41:50.642 "data_size": 65536 00:41:50.642 }, 00:41:50.642 { 00:41:50.642 "name": "BaseBdev4", 00:41:50.642 "uuid": "6556bd26-916b-5467-aed3-5719b0566ec7", 00:41:50.642 "is_configured": true, 00:41:50.642 "data_offset": 0, 00:41:50.642 "data_size": 65536 00:41:50.642 } 00:41:50.642 ] 00:41:50.642 }' 00:41:50.642 05:32:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:50.642 05:32:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:50.642 05:32:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:50.900 05:32:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:50.900 05:32:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:41:51.466 [2024-12-09 05:32:38.159107] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:41:51.466 [2024-12-09 05:32:38.159412] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:41:51.466 [2024-12-09 05:32:38.159504] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:51.724 05:32:38 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:51.724 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:51.724 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:51.724 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:51.724 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:51.724 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:51.724 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:51.724 05:32:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.724 05:32:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:51.724 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:51.724 05:32:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.042 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:52.042 "name": "raid_bdev1", 00:41:52.042 "uuid": "8fb8cb64-7354-48bb-ac79-677ac9fe2f31", 00:41:52.042 "strip_size_kb": 0, 00:41:52.042 "state": "online", 00:41:52.042 "raid_level": "raid1", 00:41:52.042 "superblock": false, 00:41:52.042 "num_base_bdevs": 4, 00:41:52.042 "num_base_bdevs_discovered": 3, 00:41:52.042 "num_base_bdevs_operational": 3, 00:41:52.042 "base_bdevs_list": [ 00:41:52.042 { 00:41:52.042 "name": "spare", 00:41:52.042 "uuid": "6922e2a8-4a7a-50bf-b688-a2970de55ef1", 00:41:52.042 "is_configured": true, 00:41:52.042 "data_offset": 0, 00:41:52.042 "data_size": 65536 00:41:52.042 }, 00:41:52.042 { 00:41:52.042 "name": null, 00:41:52.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:52.042 "is_configured": false, 00:41:52.042 
"data_offset": 0, 00:41:52.042 "data_size": 65536 00:41:52.042 }, 00:41:52.042 { 00:41:52.042 "name": "BaseBdev3", 00:41:52.042 "uuid": "d1366e2b-9ea5-5a03-9ea9-fb904b31b977", 00:41:52.042 "is_configured": true, 00:41:52.042 "data_offset": 0, 00:41:52.042 "data_size": 65536 00:41:52.042 }, 00:41:52.042 { 00:41:52.042 "name": "BaseBdev4", 00:41:52.042 "uuid": "6556bd26-916b-5467-aed3-5719b0566ec7", 00:41:52.042 "is_configured": true, 00:41:52.042 "data_offset": 0, 00:41:52.042 "data_size": 65536 00:41:52.042 } 00:41:52.042 ] 00:41:52.042 }' 00:41:52.042 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:52.042 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:41:52.042 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:52.042 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:41:52.042 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:41:52.042 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:52.042 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:52.042 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:41:52.042 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:41:52.042 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:52.042 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:52.042 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:52.042 05:32:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.042 05:32:38 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:52.042 05:32:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.042 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:52.042 "name": "raid_bdev1", 00:41:52.042 "uuid": "8fb8cb64-7354-48bb-ac79-677ac9fe2f31", 00:41:52.042 "strip_size_kb": 0, 00:41:52.042 "state": "online", 00:41:52.042 "raid_level": "raid1", 00:41:52.042 "superblock": false, 00:41:52.042 "num_base_bdevs": 4, 00:41:52.042 "num_base_bdevs_discovered": 3, 00:41:52.042 "num_base_bdevs_operational": 3, 00:41:52.042 "base_bdevs_list": [ 00:41:52.042 { 00:41:52.042 "name": "spare", 00:41:52.042 "uuid": "6922e2a8-4a7a-50bf-b688-a2970de55ef1", 00:41:52.042 "is_configured": true, 00:41:52.042 "data_offset": 0, 00:41:52.042 "data_size": 65536 00:41:52.042 }, 00:41:52.042 { 00:41:52.042 "name": null, 00:41:52.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:52.042 "is_configured": false, 00:41:52.042 "data_offset": 0, 00:41:52.042 "data_size": 65536 00:41:52.042 }, 00:41:52.042 { 00:41:52.042 "name": "BaseBdev3", 00:41:52.042 "uuid": "d1366e2b-9ea5-5a03-9ea9-fb904b31b977", 00:41:52.042 "is_configured": true, 00:41:52.042 "data_offset": 0, 00:41:52.042 "data_size": 65536 00:41:52.042 }, 00:41:52.042 { 00:41:52.042 "name": "BaseBdev4", 00:41:52.042 "uuid": "6556bd26-916b-5467-aed3-5719b0566ec7", 00:41:52.042 "is_configured": true, 00:41:52.042 "data_offset": 0, 00:41:52.042 "data_size": 65536 00:41:52.042 } 00:41:52.042 ] 00:41:52.042 }' 00:41:52.042 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:52.042 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:41:52.042 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:52.042 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none 
== \n\o\n\e ]] 00:41:52.042 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:41:52.042 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:52.042 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:52.042 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:41:52.042 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:41:52.042 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:41:52.042 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:52.043 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:52.043 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:52.043 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:52.320 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:52.320 05:32:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:52.320 05:32:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.320 05:32:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:52.320 05:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.320 05:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:52.320 "name": "raid_bdev1", 00:41:52.320 "uuid": "8fb8cb64-7354-48bb-ac79-677ac9fe2f31", 00:41:52.320 "strip_size_kb": 0, 00:41:52.320 "state": "online", 00:41:52.320 "raid_level": "raid1", 00:41:52.320 "superblock": false, 00:41:52.320 "num_base_bdevs": 4, 00:41:52.320 
"num_base_bdevs_discovered": 3, 00:41:52.320 "num_base_bdevs_operational": 3, 00:41:52.320 "base_bdevs_list": [ 00:41:52.320 { 00:41:52.320 "name": "spare", 00:41:52.320 "uuid": "6922e2a8-4a7a-50bf-b688-a2970de55ef1", 00:41:52.320 "is_configured": true, 00:41:52.320 "data_offset": 0, 00:41:52.320 "data_size": 65536 00:41:52.320 }, 00:41:52.320 { 00:41:52.320 "name": null, 00:41:52.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:52.320 "is_configured": false, 00:41:52.320 "data_offset": 0, 00:41:52.320 "data_size": 65536 00:41:52.320 }, 00:41:52.320 { 00:41:52.320 "name": "BaseBdev3", 00:41:52.320 "uuid": "d1366e2b-9ea5-5a03-9ea9-fb904b31b977", 00:41:52.320 "is_configured": true, 00:41:52.320 "data_offset": 0, 00:41:52.320 "data_size": 65536 00:41:52.320 }, 00:41:52.320 { 00:41:52.320 "name": "BaseBdev4", 00:41:52.320 "uuid": "6556bd26-916b-5467-aed3-5719b0566ec7", 00:41:52.320 "is_configured": true, 00:41:52.320 "data_offset": 0, 00:41:52.320 "data_size": 65536 00:41:52.320 } 00:41:52.320 ] 00:41:52.320 }' 00:41:52.320 05:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:52.320 05:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:52.579 05:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:41:52.579 05:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.579 05:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:52.579 [2024-12-09 05:32:39.499910] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:41:52.579 [2024-12-09 05:32:39.499965] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:41:52.579 [2024-12-09 05:32:39.500081] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:52.579 [2024-12-09 05:32:39.500228] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:41:52.579 [2024-12-09 05:32:39.500245] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:41:52.579 05:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.579 05:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:41:52.579 05:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:52.579 05:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.579 05:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:52.579 05:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.838 05:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:41:52.838 05:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:41:52.838 05:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:41:52.838 05:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:41:52.838 05:32:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:41:52.839 05:32:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:41:52.839 05:32:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:41:52.839 05:32:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:52.839 05:32:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:41:52.839 05:32:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:41:52.839 05:32:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:41:52.839 05:32:39 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:52.839 05:32:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:41:53.097 /dev/nbd0 00:41:53.097 05:32:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:41:53.097 05:32:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:41:53.097 05:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:41:53.097 05:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:41:53.097 05:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:53.097 05:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:53.097 05:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:41:53.098 05:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:41:53.098 05:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:53.098 05:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:53.098 05:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:53.098 1+0 records in 00:41:53.098 1+0 records out 00:41:53.098 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275867 s, 14.8 MB/s 00:41:53.098 05:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:53.098 05:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:41:53.098 05:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:41:53.098 05:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:41:53.098 05:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:41:53.098 05:32:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:53.098 05:32:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:53.098 05:32:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:41:53.356 /dev/nbd1 00:41:53.357 05:32:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:41:53.357 05:32:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:41:53.357 05:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:41:53.357 05:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:41:53.357 05:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:53.357 05:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:53.357 05:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:41:53.357 05:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:41:53.357 05:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:53.357 05:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:53.357 05:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:53.357 1+0 records in 00:41:53.357 1+0 records out 00:41:53.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361138 s, 11.3 MB/s 00:41:53.357 05:32:40 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:53.357 05:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:41:53.357 05:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:53.357 05:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:41:53.357 05:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:41:53.357 05:32:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:53.357 05:32:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:53.357 05:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:41:53.615 05:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:41:53.615 05:32:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:41:53.615 05:32:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:53.615 05:32:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:53.615 05:32:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:41:53.615 05:32:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:53.615 05:32:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:41:53.872 05:32:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:53.872 05:32:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:53.872 05:32:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:53.872 05:32:40 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:53.872 05:32:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:53.872 05:32:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:53.872 05:32:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:41:53.872 05:32:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:41:53.872 05:32:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:53.872 05:32:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:41:54.131 05:32:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:41:54.131 05:32:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:41:54.131 05:32:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:41:54.131 05:32:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:54.131 05:32:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:54.131 05:32:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:41:54.131 05:32:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:41:54.131 05:32:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:41:54.131 05:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:41:54.131 05:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77905 00:41:54.131 05:32:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77905 ']' 00:41:54.131 05:32:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77905 00:41:54.131 05:32:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # 
uname 00:41:54.131 05:32:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:54.131 05:32:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77905 00:41:54.131 05:32:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:54.131 05:32:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:54.131 killing process with pid 77905 00:41:54.131 05:32:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77905' 00:41:54.131 05:32:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77905 00:41:54.131 Received shutdown signal, test time was about 60.000000 seconds 00:41:54.131 00:41:54.131 Latency(us) 00:41:54.131 [2024-12-09T05:32:41.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:54.131 [2024-12-09T05:32:41.103Z] =================================================================================================================== 00:41:54.131 [2024-12-09T05:32:41.103Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:41:54.131 [2024-12-09 05:32:41.068207] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:41:54.131 05:32:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77905 00:41:54.698 [2024-12-09 05:32:41.503387] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:41:55.635 05:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:41:55.635 00:41:55.635 real 0m21.251s 00:41:55.635 user 0m23.494s 00:41:55.635 sys 0m3.886s 00:41:55.635 05:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:55.635 05:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:55.635 ************************************ 00:41:55.635 END TEST raid_rebuild_test 
00:41:55.635 ************************************ 00:41:55.895 05:32:42 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:41:55.895 05:32:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:41:55.895 05:32:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:55.895 05:32:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:41:55.895 ************************************ 00:41:55.895 START TEST raid_rebuild_test_sb 00:41:55.895 ************************************ 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78387 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@598 -- # waitforlisten 78387 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78387 ']' 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:55.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:55.895 05:32:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:55.895 [2024-12-09 05:32:42.795888] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:41:55.895 I/O size of 3145728 is greater than zero copy threshold (65536). 00:41:55.895 Zero copy mechanism will not be used. 
00:41:55.895 [2024-12-09 05:32:42.796081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78387 ] 00:41:56.154 [2024-12-09 05:32:42.973865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:56.154 [2024-12-09 05:32:43.100705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:56.414 [2024-12-09 05:32:43.286689] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:41:56.414 [2024-12-09 05:32:43.286799] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:41:56.982 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:56.982 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:41:56.982 05:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:41:56.982 05:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:41:56.982 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.982 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:56.982 BaseBdev1_malloc 00:41:56.982 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.982 05:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:41:56.982 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.982 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:56.982 [2024-12-09 05:32:43.771058] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:41:56.982 [2024-12-09 05:32:43.771231] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:56.982 [2024-12-09 05:32:43.771263] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:41:56.982 [2024-12-09 05:32:43.771282] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:56.982 [2024-12-09 05:32:43.774139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:56.982 [2024-12-09 05:32:43.774217] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:41:56.982 BaseBdev1 00:41:56.982 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.982 05:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:41:56.982 05:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:41:56.982 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.982 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:56.982 BaseBdev2_malloc 00:41:56.982 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.982 05:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:41:56.982 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.982 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:56.982 [2024-12-09 05:32:43.821096] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:41:56.982 [2024-12-09 05:32:43.821221] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:56.982 [2024-12-09 05:32:43.821253] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:41:56.982 [2024-12-09 05:32:43.821270] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:56.982 [2024-12-09 05:32:43.824085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:56.982 [2024-12-09 05:32:43.824157] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:41:56.982 BaseBdev2 00:41:56.982 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.982 05:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:41:56.982 05:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:41:56.982 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.982 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:56.982 BaseBdev3_malloc 00:41:56.982 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.983 05:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:41:56.983 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.983 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:56.983 [2024-12-09 05:32:43.883966] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:41:56.983 [2024-12-09 05:32:43.884059] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:56.983 [2024-12-09 05:32:43.884130] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:41:56.983 [2024-12-09 05:32:43.884150] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:41:56.983 [2024-12-09 05:32:43.887143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:56.983 [2024-12-09 05:32:43.887207] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:41:56.983 BaseBdev3 00:41:56.983 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.983 05:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:41:56.983 05:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:41:56.983 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.983 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:56.983 BaseBdev4_malloc 00:41:56.983 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.983 05:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:41:56.983 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.983 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:56.983 [2024-12-09 05:32:43.936352] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:41:56.983 [2024-12-09 05:32:43.936458] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:56.983 [2024-12-09 05:32:43.936489] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:41:56.983 [2024-12-09 05:32:43.936522] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:56.983 [2024-12-09 05:32:43.939449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:56.983 [2024-12-09 05:32:43.939518] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:41:56.983 BaseBdev4 00:41:56.983 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.983 05:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:41:56.983 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.983 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:57.242 spare_malloc 00:41:57.242 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.242 05:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:41:57.242 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.242 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:57.242 spare_delay 00:41:57.242 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.242 05:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:41:57.242 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.242 05:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:57.242 [2024-12-09 05:32:43.999287] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:41:57.242 [2024-12-09 05:32:43.999379] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:57.242 [2024-12-09 05:32:43.999407] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:41:57.242 [2024-12-09 05:32:43.999423] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:41:57.242 [2024-12-09 05:32:44.002363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:57.242 [2024-12-09 05:32:44.002425] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:41:57.242 spare 00:41:57.242 05:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.242 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:41:57.242 05:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.242 05:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:57.242 [2024-12-09 05:32:44.011380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:41:57.242 [2024-12-09 05:32:44.014080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:41:57.242 [2024-12-09 05:32:44.014178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:41:57.242 [2024-12-09 05:32:44.014259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:41:57.242 [2024-12-09 05:32:44.014596] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:41:57.242 [2024-12-09 05:32:44.014632] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:41:57.242 [2024-12-09 05:32:44.014986] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:41:57.242 [2024-12-09 05:32:44.015291] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:41:57.242 [2024-12-09 05:32:44.015318] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:41:57.242 [2024-12-09 05:32:44.015554] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:57.242 05:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.242 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:41:57.242 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:57.242 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:57.242 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:41:57.242 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:41:57.242 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:57.242 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:57.242 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:57.242 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:57.242 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:57.242 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:57.242 05:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.242 05:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:57.242 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:57.242 05:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.242 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:57.242 "name": "raid_bdev1", 00:41:57.242 "uuid": 
"17971385-a1f9-46ae-b144-c8631a44cd34", 00:41:57.242 "strip_size_kb": 0, 00:41:57.242 "state": "online", 00:41:57.242 "raid_level": "raid1", 00:41:57.242 "superblock": true, 00:41:57.242 "num_base_bdevs": 4, 00:41:57.242 "num_base_bdevs_discovered": 4, 00:41:57.242 "num_base_bdevs_operational": 4, 00:41:57.242 "base_bdevs_list": [ 00:41:57.242 { 00:41:57.242 "name": "BaseBdev1", 00:41:57.242 "uuid": "fef34c6e-dda0-56ec-ae38-c6898277d366", 00:41:57.242 "is_configured": true, 00:41:57.242 "data_offset": 2048, 00:41:57.242 "data_size": 63488 00:41:57.242 }, 00:41:57.242 { 00:41:57.242 "name": "BaseBdev2", 00:41:57.242 "uuid": "e025d362-9e03-5991-9a50-ca57e9d80be1", 00:41:57.242 "is_configured": true, 00:41:57.242 "data_offset": 2048, 00:41:57.242 "data_size": 63488 00:41:57.242 }, 00:41:57.242 { 00:41:57.242 "name": "BaseBdev3", 00:41:57.242 "uuid": "09bc17dd-c049-5b88-ad5c-1d5583b3fa79", 00:41:57.242 "is_configured": true, 00:41:57.242 "data_offset": 2048, 00:41:57.242 "data_size": 63488 00:41:57.242 }, 00:41:57.242 { 00:41:57.242 "name": "BaseBdev4", 00:41:57.242 "uuid": "7080fca4-c6c3-55e8-be19-70da3c7468b1", 00:41:57.242 "is_configured": true, 00:41:57.242 "data_offset": 2048, 00:41:57.242 "data_size": 63488 00:41:57.242 } 00:41:57.242 ] 00:41:57.242 }' 00:41:57.242 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:57.242 05:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:57.811 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:41:57.811 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:41:57.811 05:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.811 05:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:57.811 [2024-12-09 05:32:44.564261] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:41:57.811 05:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.811 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:41:57.811 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:57.811 05:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.811 05:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:57.811 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:41:57.811 05:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.811 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:41:57.811 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:41:57.811 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:41:57.811 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:41:57.811 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:41:57.811 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:41:57.811 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:41:57.811 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:41:57.811 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:41:57.811 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:41:57.811 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:41:57.811 05:32:44 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:41:57.811 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:41:57.811 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:41:58.070 [2024-12-09 05:32:44.951939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:41:58.070 /dev/nbd0 00:41:58.070 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:41:58.070 05:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:41:58.070 05:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:41:58.070 05:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:41:58.070 05:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:58.070 05:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:58.070 05:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:41:58.070 05:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:41:58.070 05:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:58.070 05:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:58.070 05:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:58.070 1+0 records in 00:41:58.070 1+0 records out 00:41:58.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401048 s, 10.2 MB/s 00:41:58.070 05:32:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:58.070 05:32:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:41:58.070 05:32:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:58.070 05:32:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:41:58.070 05:32:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:41:58.070 05:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:58.070 05:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:41:58.070 05:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:41:58.070 05:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:41:58.070 05:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:42:06.229 63488+0 records in 00:42:06.229 63488+0 records out 00:42:06.229 32505856 bytes (33 MB, 31 MiB) copied, 7.76676 s, 4.2 MB/s 00:42:06.229 05:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:42:06.229 05:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:42:06.229 05:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:42:06.229 05:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:06.229 05:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:42:06.229 05:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:06.229 05:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:42:06.229 [2024-12-09 05:32:53.046809] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:06.229 05:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:06.229 05:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:06.229 05:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:06.229 05:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:06.229 05:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:06.229 05:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:06.229 05:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:42:06.229 05:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:42:06.229 05:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:42:06.229 05:32:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:06.229 05:32:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:06.229 [2024-12-09 05:32:53.078917] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:42:06.229 05:32:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:06.229 05:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:42:06.229 05:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:06.229 05:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:06.229 05:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:06.229 05:32:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:06.229 05:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:42:06.229 05:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:06.229 05:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:06.229 05:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:06.229 05:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:06.229 05:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:06.229 05:32:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:06.229 05:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:06.229 05:32:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:06.229 05:32:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:06.229 05:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:06.229 "name": "raid_bdev1", 00:42:06.229 "uuid": "17971385-a1f9-46ae-b144-c8631a44cd34", 00:42:06.229 "strip_size_kb": 0, 00:42:06.229 "state": "online", 00:42:06.229 "raid_level": "raid1", 00:42:06.229 "superblock": true, 00:42:06.229 "num_base_bdevs": 4, 00:42:06.229 "num_base_bdevs_discovered": 3, 00:42:06.229 "num_base_bdevs_operational": 3, 00:42:06.229 "base_bdevs_list": [ 00:42:06.229 { 00:42:06.229 "name": null, 00:42:06.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:06.229 "is_configured": false, 00:42:06.229 "data_offset": 0, 00:42:06.229 "data_size": 63488 00:42:06.229 }, 00:42:06.229 { 00:42:06.229 "name": "BaseBdev2", 00:42:06.229 "uuid": "e025d362-9e03-5991-9a50-ca57e9d80be1", 00:42:06.229 "is_configured": true, 00:42:06.229 
"data_offset": 2048, 00:42:06.229 "data_size": 63488 00:42:06.229 }, 00:42:06.229 { 00:42:06.229 "name": "BaseBdev3", 00:42:06.229 "uuid": "09bc17dd-c049-5b88-ad5c-1d5583b3fa79", 00:42:06.229 "is_configured": true, 00:42:06.229 "data_offset": 2048, 00:42:06.229 "data_size": 63488 00:42:06.229 }, 00:42:06.229 { 00:42:06.229 "name": "BaseBdev4", 00:42:06.229 "uuid": "7080fca4-c6c3-55e8-be19-70da3c7468b1", 00:42:06.229 "is_configured": true, 00:42:06.229 "data_offset": 2048, 00:42:06.229 "data_size": 63488 00:42:06.229 } 00:42:06.229 ] 00:42:06.229 }' 00:42:06.229 05:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:06.229 05:32:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:06.797 05:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:42:06.797 05:32:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:06.797 05:32:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:06.797 [2024-12-09 05:32:53.583069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:06.797 [2024-12-09 05:32:53.597263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:42:06.797 05:32:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:06.797 05:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:42:06.797 [2024-12-09 05:32:53.600019] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:07.736 05:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:07.736 05:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:07.736 05:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:42:07.736 05:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:07.736 05:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:07.736 05:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:07.736 05:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:07.736 05:32:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:07.736 05:32:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:07.736 05:32:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:07.736 05:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:07.736 "name": "raid_bdev1", 00:42:07.736 "uuid": "17971385-a1f9-46ae-b144-c8631a44cd34", 00:42:07.736 "strip_size_kb": 0, 00:42:07.736 "state": "online", 00:42:07.736 "raid_level": "raid1", 00:42:07.736 "superblock": true, 00:42:07.736 "num_base_bdevs": 4, 00:42:07.736 "num_base_bdevs_discovered": 4, 00:42:07.736 "num_base_bdevs_operational": 4, 00:42:07.736 "process": { 00:42:07.736 "type": "rebuild", 00:42:07.736 "target": "spare", 00:42:07.736 "progress": { 00:42:07.736 "blocks": 20480, 00:42:07.736 "percent": 32 00:42:07.736 } 00:42:07.736 }, 00:42:07.736 "base_bdevs_list": [ 00:42:07.736 { 00:42:07.736 "name": "spare", 00:42:07.736 "uuid": "7ba2f017-3274-5f10-b3a0-554216cc697c", 00:42:07.736 "is_configured": true, 00:42:07.736 "data_offset": 2048, 00:42:07.736 "data_size": 63488 00:42:07.736 }, 00:42:07.736 { 00:42:07.736 "name": "BaseBdev2", 00:42:07.736 "uuid": "e025d362-9e03-5991-9a50-ca57e9d80be1", 00:42:07.736 "is_configured": true, 00:42:07.736 "data_offset": 2048, 00:42:07.736 "data_size": 63488 00:42:07.736 }, 00:42:07.736 { 00:42:07.736 "name": "BaseBdev3", 00:42:07.736 "uuid": 
"09bc17dd-c049-5b88-ad5c-1d5583b3fa79", 00:42:07.736 "is_configured": true, 00:42:07.736 "data_offset": 2048, 00:42:07.736 "data_size": 63488 00:42:07.736 }, 00:42:07.736 { 00:42:07.736 "name": "BaseBdev4", 00:42:07.736 "uuid": "7080fca4-c6c3-55e8-be19-70da3c7468b1", 00:42:07.736 "is_configured": true, 00:42:07.736 "data_offset": 2048, 00:42:07.736 "data_size": 63488 00:42:07.736 } 00:42:07.736 ] 00:42:07.736 }' 00:42:07.736 05:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:07.994 05:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:07.994 05:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:07.994 05:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:07.994 05:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:42:07.994 05:32:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:07.994 05:32:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:07.994 [2024-12-09 05:32:54.773152] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:07.994 [2024-12-09 05:32:54.809143] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:42:07.994 [2024-12-09 05:32:54.809248] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:07.994 [2024-12-09 05:32:54.809290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:07.994 [2024-12-09 05:32:54.809305] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:42:07.994 05:32:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:07.994 05:32:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:42:07.994 05:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:07.994 05:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:07.994 05:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:07.994 05:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:07.994 05:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:42:07.994 05:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:07.994 05:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:07.994 05:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:07.994 05:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:07.994 05:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:07.994 05:32:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:07.994 05:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:07.994 05:32:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:07.994 05:32:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:07.994 05:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:07.994 "name": "raid_bdev1", 00:42:07.994 "uuid": "17971385-a1f9-46ae-b144-c8631a44cd34", 00:42:07.994 "strip_size_kb": 0, 00:42:07.994 "state": "online", 00:42:07.994 "raid_level": "raid1", 00:42:07.994 "superblock": true, 00:42:07.994 "num_base_bdevs": 4, 00:42:07.994 
"num_base_bdevs_discovered": 3, 00:42:07.994 "num_base_bdevs_operational": 3, 00:42:07.994 "base_bdevs_list": [ 00:42:07.994 { 00:42:07.994 "name": null, 00:42:07.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:07.994 "is_configured": false, 00:42:07.994 "data_offset": 0, 00:42:07.994 "data_size": 63488 00:42:07.994 }, 00:42:07.994 { 00:42:07.994 "name": "BaseBdev2", 00:42:07.994 "uuid": "e025d362-9e03-5991-9a50-ca57e9d80be1", 00:42:07.994 "is_configured": true, 00:42:07.994 "data_offset": 2048, 00:42:07.995 "data_size": 63488 00:42:07.995 }, 00:42:07.995 { 00:42:07.995 "name": "BaseBdev3", 00:42:07.995 "uuid": "09bc17dd-c049-5b88-ad5c-1d5583b3fa79", 00:42:07.995 "is_configured": true, 00:42:07.995 "data_offset": 2048, 00:42:07.995 "data_size": 63488 00:42:07.995 }, 00:42:07.995 { 00:42:07.995 "name": "BaseBdev4", 00:42:07.995 "uuid": "7080fca4-c6c3-55e8-be19-70da3c7468b1", 00:42:07.995 "is_configured": true, 00:42:07.995 "data_offset": 2048, 00:42:07.995 "data_size": 63488 00:42:07.995 } 00:42:07.995 ] 00:42:07.995 }' 00:42:07.995 05:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:07.995 05:32:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:08.561 05:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:08.561 05:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:08.561 05:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:42:08.561 05:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:42:08.561 05:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:08.561 05:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:08.561 05:32:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:42:08.561 05:32:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:08.561 05:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:08.561 05:32:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:08.561 05:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:08.561 "name": "raid_bdev1", 00:42:08.561 "uuid": "17971385-a1f9-46ae-b144-c8631a44cd34", 00:42:08.561 "strip_size_kb": 0, 00:42:08.561 "state": "online", 00:42:08.561 "raid_level": "raid1", 00:42:08.561 "superblock": true, 00:42:08.561 "num_base_bdevs": 4, 00:42:08.561 "num_base_bdevs_discovered": 3, 00:42:08.561 "num_base_bdevs_operational": 3, 00:42:08.561 "base_bdevs_list": [ 00:42:08.561 { 00:42:08.561 "name": null, 00:42:08.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:08.561 "is_configured": false, 00:42:08.561 "data_offset": 0, 00:42:08.561 "data_size": 63488 00:42:08.561 }, 00:42:08.561 { 00:42:08.561 "name": "BaseBdev2", 00:42:08.561 "uuid": "e025d362-9e03-5991-9a50-ca57e9d80be1", 00:42:08.561 "is_configured": true, 00:42:08.561 "data_offset": 2048, 00:42:08.561 "data_size": 63488 00:42:08.561 }, 00:42:08.561 { 00:42:08.561 "name": "BaseBdev3", 00:42:08.561 "uuid": "09bc17dd-c049-5b88-ad5c-1d5583b3fa79", 00:42:08.561 "is_configured": true, 00:42:08.561 "data_offset": 2048, 00:42:08.561 "data_size": 63488 00:42:08.561 }, 00:42:08.561 { 00:42:08.561 "name": "BaseBdev4", 00:42:08.561 "uuid": "7080fca4-c6c3-55e8-be19-70da3c7468b1", 00:42:08.561 "is_configured": true, 00:42:08.561 "data_offset": 2048, 00:42:08.561 "data_size": 63488 00:42:08.561 } 00:42:08.561 ] 00:42:08.561 }' 00:42:08.561 05:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:08.561 05:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:42:08.561 05:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:08.561 05:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:08.561 05:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:42:08.561 05:32:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:08.561 05:32:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:08.561 [2024-12-09 05:32:55.501833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:08.561 [2024-12-09 05:32:55.515676] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:42:08.561 05:32:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:08.561 05:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:42:08.561 [2024-12-09 05:32:55.518486] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:09.935 "name": "raid_bdev1", 00:42:09.935 "uuid": "17971385-a1f9-46ae-b144-c8631a44cd34", 00:42:09.935 "strip_size_kb": 0, 00:42:09.935 "state": "online", 00:42:09.935 "raid_level": "raid1", 00:42:09.935 "superblock": true, 00:42:09.935 "num_base_bdevs": 4, 00:42:09.935 "num_base_bdevs_discovered": 4, 00:42:09.935 "num_base_bdevs_operational": 4, 00:42:09.935 "process": { 00:42:09.935 "type": "rebuild", 00:42:09.935 "target": "spare", 00:42:09.935 "progress": { 00:42:09.935 "blocks": 20480, 00:42:09.935 "percent": 32 00:42:09.935 } 00:42:09.935 }, 00:42:09.935 "base_bdevs_list": [ 00:42:09.935 { 00:42:09.935 "name": "spare", 00:42:09.935 "uuid": "7ba2f017-3274-5f10-b3a0-554216cc697c", 00:42:09.935 "is_configured": true, 00:42:09.935 "data_offset": 2048, 00:42:09.935 "data_size": 63488 00:42:09.935 }, 00:42:09.935 { 00:42:09.935 "name": "BaseBdev2", 00:42:09.935 "uuid": "e025d362-9e03-5991-9a50-ca57e9d80be1", 00:42:09.935 "is_configured": true, 00:42:09.935 "data_offset": 2048, 00:42:09.935 "data_size": 63488 00:42:09.935 }, 00:42:09.935 { 00:42:09.935 "name": "BaseBdev3", 00:42:09.935 "uuid": "09bc17dd-c049-5b88-ad5c-1d5583b3fa79", 00:42:09.935 "is_configured": true, 00:42:09.935 "data_offset": 2048, 00:42:09.935 "data_size": 63488 00:42:09.935 }, 00:42:09.935 { 00:42:09.935 "name": "BaseBdev4", 00:42:09.935 "uuid": "7080fca4-c6c3-55e8-be19-70da3c7468b1", 00:42:09.935 "is_configured": true, 00:42:09.935 "data_offset": 2048, 00:42:09.935 "data_size": 63488 00:42:09.935 } 00:42:09.935 ] 00:42:09.935 }' 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:42:09.935 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:09.935 [2024-12-09 05:32:56.683541] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:42:09.935 [2024-12-09 05:32:56.827590] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:09.935 "name": "raid_bdev1", 00:42:09.935 "uuid": "17971385-a1f9-46ae-b144-c8631a44cd34", 00:42:09.935 "strip_size_kb": 0, 00:42:09.935 "state": "online", 00:42:09.935 "raid_level": "raid1", 00:42:09.935 "superblock": true, 00:42:09.935 "num_base_bdevs": 4, 00:42:09.935 "num_base_bdevs_discovered": 3, 00:42:09.935 "num_base_bdevs_operational": 3, 00:42:09.935 "process": { 00:42:09.935 "type": "rebuild", 00:42:09.935 "target": "spare", 00:42:09.935 "progress": { 00:42:09.935 "blocks": 24576, 00:42:09.935 "percent": 38 00:42:09.935 } 00:42:09.935 }, 00:42:09.935 "base_bdevs_list": [ 00:42:09.935 { 00:42:09.935 "name": "spare", 00:42:09.935 "uuid": "7ba2f017-3274-5f10-b3a0-554216cc697c", 00:42:09.935 "is_configured": true, 00:42:09.935 "data_offset": 2048, 00:42:09.935 "data_size": 63488 00:42:09.935 }, 00:42:09.935 { 00:42:09.935 "name": null, 00:42:09.935 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:42:09.935 "is_configured": false, 00:42:09.935 "data_offset": 0, 00:42:09.935 "data_size": 63488 00:42:09.935 }, 00:42:09.935 { 00:42:09.935 "name": "BaseBdev3", 00:42:09.935 "uuid": "09bc17dd-c049-5b88-ad5c-1d5583b3fa79", 00:42:09.935 "is_configured": true, 00:42:09.935 "data_offset": 2048, 00:42:09.935 "data_size": 63488 00:42:09.935 }, 00:42:09.935 { 00:42:09.935 "name": "BaseBdev4", 00:42:09.935 "uuid": "7080fca4-c6c3-55e8-be19-70da3c7468b1", 00:42:09.935 "is_configured": true, 00:42:09.935 "data_offset": 2048, 00:42:09.935 "data_size": 63488 00:42:09.935 } 00:42:09.935 ] 00:42:09.935 }' 00:42:09.935 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:10.193 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:10.193 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:10.193 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:10.193 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=512 00:42:10.193 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:10.193 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:10.193 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:10.193 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:10.193 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:10.193 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:10.193 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:10.193 
05:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:10.193 05:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:10.193 05:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:10.193 05:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:10.193 05:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:10.193 "name": "raid_bdev1", 00:42:10.193 "uuid": "17971385-a1f9-46ae-b144-c8631a44cd34", 00:42:10.193 "strip_size_kb": 0, 00:42:10.193 "state": "online", 00:42:10.193 "raid_level": "raid1", 00:42:10.193 "superblock": true, 00:42:10.193 "num_base_bdevs": 4, 00:42:10.193 "num_base_bdevs_discovered": 3, 00:42:10.193 "num_base_bdevs_operational": 3, 00:42:10.193 "process": { 00:42:10.193 "type": "rebuild", 00:42:10.193 "target": "spare", 00:42:10.193 "progress": { 00:42:10.193 "blocks": 26624, 00:42:10.193 "percent": 41 00:42:10.193 } 00:42:10.193 }, 00:42:10.193 "base_bdevs_list": [ 00:42:10.193 { 00:42:10.193 "name": "spare", 00:42:10.193 "uuid": "7ba2f017-3274-5f10-b3a0-554216cc697c", 00:42:10.193 "is_configured": true, 00:42:10.193 "data_offset": 2048, 00:42:10.193 "data_size": 63488 00:42:10.193 }, 00:42:10.193 { 00:42:10.193 "name": null, 00:42:10.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:10.193 "is_configured": false, 00:42:10.193 "data_offset": 0, 00:42:10.193 "data_size": 63488 00:42:10.193 }, 00:42:10.193 { 00:42:10.193 "name": "BaseBdev3", 00:42:10.193 "uuid": "09bc17dd-c049-5b88-ad5c-1d5583b3fa79", 00:42:10.193 "is_configured": true, 00:42:10.193 "data_offset": 2048, 00:42:10.193 "data_size": 63488 00:42:10.193 }, 00:42:10.193 { 00:42:10.193 "name": "BaseBdev4", 00:42:10.193 "uuid": "7080fca4-c6c3-55e8-be19-70da3c7468b1", 00:42:10.193 "is_configured": true, 00:42:10.193 "data_offset": 2048, 00:42:10.193 "data_size": 63488 
00:42:10.193 } 00:42:10.193 ] 00:42:10.193 }' 00:42:10.193 05:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:10.193 05:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:10.193 05:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:10.193 05:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:10.193 05:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:42:11.608 05:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:11.608 05:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:11.608 05:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:11.608 05:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:11.608 05:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:11.608 05:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:11.608 05:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:11.608 05:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:11.608 05:32:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:11.608 05:32:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:11.608 05:32:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:11.608 05:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:11.608 "name": "raid_bdev1", 00:42:11.608 "uuid": 
"17971385-a1f9-46ae-b144-c8631a44cd34", 00:42:11.608 "strip_size_kb": 0, 00:42:11.608 "state": "online", 00:42:11.608 "raid_level": "raid1", 00:42:11.608 "superblock": true, 00:42:11.608 "num_base_bdevs": 4, 00:42:11.608 "num_base_bdevs_discovered": 3, 00:42:11.608 "num_base_bdevs_operational": 3, 00:42:11.608 "process": { 00:42:11.608 "type": "rebuild", 00:42:11.609 "target": "spare", 00:42:11.609 "progress": { 00:42:11.609 "blocks": 51200, 00:42:11.609 "percent": 80 00:42:11.609 } 00:42:11.609 }, 00:42:11.609 "base_bdevs_list": [ 00:42:11.609 { 00:42:11.609 "name": "spare", 00:42:11.609 "uuid": "7ba2f017-3274-5f10-b3a0-554216cc697c", 00:42:11.609 "is_configured": true, 00:42:11.609 "data_offset": 2048, 00:42:11.609 "data_size": 63488 00:42:11.609 }, 00:42:11.609 { 00:42:11.609 "name": null, 00:42:11.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:11.609 "is_configured": false, 00:42:11.609 "data_offset": 0, 00:42:11.609 "data_size": 63488 00:42:11.609 }, 00:42:11.609 { 00:42:11.609 "name": "BaseBdev3", 00:42:11.609 "uuid": "09bc17dd-c049-5b88-ad5c-1d5583b3fa79", 00:42:11.609 "is_configured": true, 00:42:11.609 "data_offset": 2048, 00:42:11.609 "data_size": 63488 00:42:11.609 }, 00:42:11.609 { 00:42:11.609 "name": "BaseBdev4", 00:42:11.609 "uuid": "7080fca4-c6c3-55e8-be19-70da3c7468b1", 00:42:11.609 "is_configured": true, 00:42:11.609 "data_offset": 2048, 00:42:11.609 "data_size": 63488 00:42:11.609 } 00:42:11.609 ] 00:42:11.609 }' 00:42:11.609 05:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:11.609 05:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:11.609 05:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:11.609 05:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:11.609 05:32:58 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:42:11.867 [2024-12-09 05:32:58.742375] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:42:11.867 [2024-12-09 05:32:58.742486] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:42:11.867 [2024-12-09 05:32:58.742694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:12.433 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:12.433 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:12.433 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:12.433 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:12.433 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:12.433 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:12.433 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:12.433 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:12.433 05:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.433 05:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:12.433 05:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.433 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:12.433 "name": "raid_bdev1", 00:42:12.433 "uuid": "17971385-a1f9-46ae-b144-c8631a44cd34", 00:42:12.433 "strip_size_kb": 0, 00:42:12.433 "state": "online", 00:42:12.433 "raid_level": "raid1", 00:42:12.433 "superblock": true, 00:42:12.433 "num_base_bdevs": 
4, 00:42:12.433 "num_base_bdevs_discovered": 3, 00:42:12.433 "num_base_bdevs_operational": 3, 00:42:12.433 "base_bdevs_list": [ 00:42:12.433 { 00:42:12.433 "name": "spare", 00:42:12.433 "uuid": "7ba2f017-3274-5f10-b3a0-554216cc697c", 00:42:12.433 "is_configured": true, 00:42:12.433 "data_offset": 2048, 00:42:12.433 "data_size": 63488 00:42:12.433 }, 00:42:12.433 { 00:42:12.433 "name": null, 00:42:12.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:12.433 "is_configured": false, 00:42:12.433 "data_offset": 0, 00:42:12.433 "data_size": 63488 00:42:12.433 }, 00:42:12.433 { 00:42:12.433 "name": "BaseBdev3", 00:42:12.433 "uuid": "09bc17dd-c049-5b88-ad5c-1d5583b3fa79", 00:42:12.433 "is_configured": true, 00:42:12.433 "data_offset": 2048, 00:42:12.433 "data_size": 63488 00:42:12.433 }, 00:42:12.433 { 00:42:12.433 "name": "BaseBdev4", 00:42:12.433 "uuid": "7080fca4-c6c3-55e8-be19-70da3c7468b1", 00:42:12.433 "is_configured": true, 00:42:12.433 "data_offset": 2048, 00:42:12.433 "data_size": 63488 00:42:12.433 } 00:42:12.433 ] 00:42:12.433 }' 00:42:12.433 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:12.691 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:42:12.691 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:12.691 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:42:12.691 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:42:12.691 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:12.691 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:12.691 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:42:12.691 05:32:59 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:42:12.691 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:12.691 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:12.691 05:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.691 05:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:12.691 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:12.691 05:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.691 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:12.691 "name": "raid_bdev1", 00:42:12.691 "uuid": "17971385-a1f9-46ae-b144-c8631a44cd34", 00:42:12.691 "strip_size_kb": 0, 00:42:12.691 "state": "online", 00:42:12.691 "raid_level": "raid1", 00:42:12.691 "superblock": true, 00:42:12.691 "num_base_bdevs": 4, 00:42:12.691 "num_base_bdevs_discovered": 3, 00:42:12.691 "num_base_bdevs_operational": 3, 00:42:12.691 "base_bdevs_list": [ 00:42:12.691 { 00:42:12.691 "name": "spare", 00:42:12.691 "uuid": "7ba2f017-3274-5f10-b3a0-554216cc697c", 00:42:12.691 "is_configured": true, 00:42:12.691 "data_offset": 2048, 00:42:12.691 "data_size": 63488 00:42:12.691 }, 00:42:12.691 { 00:42:12.691 "name": null, 00:42:12.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:12.691 "is_configured": false, 00:42:12.691 "data_offset": 0, 00:42:12.691 "data_size": 63488 00:42:12.691 }, 00:42:12.691 { 00:42:12.691 "name": "BaseBdev3", 00:42:12.691 "uuid": "09bc17dd-c049-5b88-ad5c-1d5583b3fa79", 00:42:12.691 "is_configured": true, 00:42:12.691 "data_offset": 2048, 00:42:12.691 "data_size": 63488 00:42:12.691 }, 00:42:12.691 { 00:42:12.691 "name": "BaseBdev4", 00:42:12.691 "uuid": 
"7080fca4-c6c3-55e8-be19-70da3c7468b1", 00:42:12.691 "is_configured": true, 00:42:12.691 "data_offset": 2048, 00:42:12.691 "data_size": 63488 00:42:12.691 } 00:42:12.691 ] 00:42:12.691 }' 00:42:12.691 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:12.691 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:42:12.691 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:12.691 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:12.691 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:42:12.691 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:12.691 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:12.691 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:12.691 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:12.691 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:42:12.691 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:12.691 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:12.691 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:12.691 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:12.691 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:12.691 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:12.691 05:32:59 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.691 05:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:12.691 05:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.948 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:12.948 "name": "raid_bdev1", 00:42:12.948 "uuid": "17971385-a1f9-46ae-b144-c8631a44cd34", 00:42:12.948 "strip_size_kb": 0, 00:42:12.948 "state": "online", 00:42:12.948 "raid_level": "raid1", 00:42:12.948 "superblock": true, 00:42:12.948 "num_base_bdevs": 4, 00:42:12.948 "num_base_bdevs_discovered": 3, 00:42:12.948 "num_base_bdevs_operational": 3, 00:42:12.948 "base_bdevs_list": [ 00:42:12.948 { 00:42:12.948 "name": "spare", 00:42:12.948 "uuid": "7ba2f017-3274-5f10-b3a0-554216cc697c", 00:42:12.948 "is_configured": true, 00:42:12.948 "data_offset": 2048, 00:42:12.948 "data_size": 63488 00:42:12.948 }, 00:42:12.948 { 00:42:12.948 "name": null, 00:42:12.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:12.948 "is_configured": false, 00:42:12.948 "data_offset": 0, 00:42:12.948 "data_size": 63488 00:42:12.948 }, 00:42:12.948 { 00:42:12.948 "name": "BaseBdev3", 00:42:12.948 "uuid": "09bc17dd-c049-5b88-ad5c-1d5583b3fa79", 00:42:12.948 "is_configured": true, 00:42:12.948 "data_offset": 2048, 00:42:12.949 "data_size": 63488 00:42:12.949 }, 00:42:12.949 { 00:42:12.949 "name": "BaseBdev4", 00:42:12.949 "uuid": "7080fca4-c6c3-55e8-be19-70da3c7468b1", 00:42:12.949 "is_configured": true, 00:42:12.949 "data_offset": 2048, 00:42:12.949 "data_size": 63488 00:42:12.949 } 00:42:12.949 ] 00:42:12.949 }' 00:42:12.949 05:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:12.949 05:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:13.206 05:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:42:13.206 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:13.206 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:13.206 [2024-12-09 05:33:00.125423] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:13.206 [2024-12-09 05:33:00.125467] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:42:13.206 [2024-12-09 05:33:00.125597] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:13.206 [2024-12-09 05:33:00.125704] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:13.206 [2024-12-09 05:33:00.125736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:42:13.206 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:13.206 05:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:42:13.206 05:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:13.206 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:13.206 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:13.206 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:13.206 05:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:42:13.206 05:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:42:13.206 05:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:42:13.206 05:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:42:13.206 
05:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:42:13.206 05:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:42:13.206 05:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:42:13.206 05:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:13.206 05:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:13.206 05:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:42:13.206 05:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:13.206 05:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:13.206 05:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:42:13.801 /dev/nbd0 00:42:13.801 05:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:42:13.801 05:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:42:13.801 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:42:13.801 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:42:13.801 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:42:13.801 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:42:13.801 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:42:13.801 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:42:13.801 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:42:13.801 05:33:00 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:42:13.801 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:13.801 1+0 records in 00:42:13.801 1+0 records out 00:42:13.801 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290468 s, 14.1 MB/s 00:42:13.801 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:13.801 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:42:13.801 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:13.801 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:42:13.801 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:42:13.801 05:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:13.801 05:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:13.801 05:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:42:14.061 /dev/nbd1 00:42:14.061 05:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:42:14.061 05:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:42:14.061 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:42:14.061 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:42:14.061 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:42:14.061 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- 
# (( i <= 20 )) 00:42:14.061 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:42:14.061 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:42:14.061 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:42:14.061 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:42:14.061 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:14.061 1+0 records in 00:42:14.061 1+0 records out 00:42:14.061 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423364 s, 9.7 MB/s 00:42:14.061 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:14.061 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:42:14.061 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:14.061 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:42:14.061 05:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:42:14.061 05:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:14.061 05:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:14.061 05:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:42:14.061 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:42:14.061 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:42:14.061 05:33:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:14.061 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:14.061 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:42:14.061 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:14.061 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:42:14.627 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:14.627 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:14.627 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:14.627 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:14.627 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:14.627 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:14.627 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:42:14.627 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:42:14.627 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:14.627 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 
-- # (( i = 1 )) 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:14.886 [2024-12-09 05:33:01.661739] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:42:14.886 [2024-12-09 05:33:01.661836] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:14.886 [2024-12-09 05:33:01.661872] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:42:14.886 [2024-12-09 05:33:01.661888] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:14.886 [2024-12-09 05:33:01.665091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:14.886 [2024-12-09 05:33:01.665152] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: spare 00:42:14.886 [2024-12-09 05:33:01.665274] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:42:14.886 [2024-12-09 05:33:01.665342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:14.886 [2024-12-09 05:33:01.665550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:42:14.886 [2024-12-09 05:33:01.665698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:42:14.886 spare 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:14.886 [2024-12-09 05:33:01.765914] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:42:14.886 [2024-12-09 05:33:01.765950] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:42:14.886 [2024-12-09 05:33:01.766369] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:42:14.886 [2024-12-09 05:33:01.766654] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:42:14.886 [2024-12-09 05:33:01.766686] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:42:14.886 [2024-12-09 05:33:01.766961] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:42:14.886 05:33:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:14.886 "name": "raid_bdev1", 00:42:14.886 "uuid": "17971385-a1f9-46ae-b144-c8631a44cd34", 00:42:14.886 "strip_size_kb": 0, 00:42:14.886 "state": "online", 00:42:14.886 "raid_level": "raid1", 00:42:14.886 "superblock": true, 00:42:14.886 "num_base_bdevs": 4, 00:42:14.886 "num_base_bdevs_discovered": 3, 00:42:14.886 "num_base_bdevs_operational": 3, 00:42:14.886 "base_bdevs_list": [ 00:42:14.886 { 
00:42:14.886 "name": "spare", 00:42:14.886 "uuid": "7ba2f017-3274-5f10-b3a0-554216cc697c", 00:42:14.886 "is_configured": true, 00:42:14.886 "data_offset": 2048, 00:42:14.886 "data_size": 63488 00:42:14.886 }, 00:42:14.886 { 00:42:14.886 "name": null, 00:42:14.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:14.886 "is_configured": false, 00:42:14.886 "data_offset": 2048, 00:42:14.886 "data_size": 63488 00:42:14.886 }, 00:42:14.886 { 00:42:14.886 "name": "BaseBdev3", 00:42:14.886 "uuid": "09bc17dd-c049-5b88-ad5c-1d5583b3fa79", 00:42:14.886 "is_configured": true, 00:42:14.886 "data_offset": 2048, 00:42:14.886 "data_size": 63488 00:42:14.886 }, 00:42:14.886 { 00:42:14.886 "name": "BaseBdev4", 00:42:14.886 "uuid": "7080fca4-c6c3-55e8-be19-70da3c7468b1", 00:42:14.886 "is_configured": true, 00:42:14.886 "data_offset": 2048, 00:42:14.886 "data_size": 63488 00:42:14.886 } 00:42:14.886 ] 00:42:14.886 }' 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:14.886 05:33:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:15.454 05:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:15.454 05:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:15.454 05:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:42:15.454 05:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:42:15.454 05:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:15.454 05:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:15.454 05:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:15.454 05:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:42:15.454 05:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:15.454 05:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:15.454 05:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:15.454 "name": "raid_bdev1", 00:42:15.454 "uuid": "17971385-a1f9-46ae-b144-c8631a44cd34", 00:42:15.454 "strip_size_kb": 0, 00:42:15.454 "state": "online", 00:42:15.454 "raid_level": "raid1", 00:42:15.454 "superblock": true, 00:42:15.454 "num_base_bdevs": 4, 00:42:15.454 "num_base_bdevs_discovered": 3, 00:42:15.454 "num_base_bdevs_operational": 3, 00:42:15.454 "base_bdevs_list": [ 00:42:15.454 { 00:42:15.454 "name": "spare", 00:42:15.454 "uuid": "7ba2f017-3274-5f10-b3a0-554216cc697c", 00:42:15.454 "is_configured": true, 00:42:15.454 "data_offset": 2048, 00:42:15.454 "data_size": 63488 00:42:15.454 }, 00:42:15.454 { 00:42:15.454 "name": null, 00:42:15.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:15.454 "is_configured": false, 00:42:15.454 "data_offset": 2048, 00:42:15.454 "data_size": 63488 00:42:15.454 }, 00:42:15.454 { 00:42:15.454 "name": "BaseBdev3", 00:42:15.454 "uuid": "09bc17dd-c049-5b88-ad5c-1d5583b3fa79", 00:42:15.454 "is_configured": true, 00:42:15.454 "data_offset": 2048, 00:42:15.454 "data_size": 63488 00:42:15.454 }, 00:42:15.454 { 00:42:15.454 "name": "BaseBdev4", 00:42:15.454 "uuid": "7080fca4-c6c3-55e8-be19-70da3c7468b1", 00:42:15.454 "is_configured": true, 00:42:15.454 "data_offset": 2048, 00:42:15.454 "data_size": 63488 00:42:15.454 } 00:42:15.454 ] 00:42:15.454 }' 00:42:15.454 05:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:15.454 05:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:42:15.454 05:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:15.714 05:33:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:15.714 05:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:42:15.714 05:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:15.714 05:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:15.714 05:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:15.714 05:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:15.714 05:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:42:15.714 05:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:42:15.714 05:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:15.714 05:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:15.714 [2024-12-09 05:33:02.510216] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:15.714 05:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:15.714 05:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:42:15.714 05:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:15.714 05:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:15.714 05:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:15.714 05:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:15.714 05:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:15.714 05:33:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:15.714 05:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:15.714 05:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:15.714 05:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:15.714 05:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:15.714 05:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:15.714 05:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:15.714 05:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:15.714 05:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:15.714 05:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:15.714 "name": "raid_bdev1", 00:42:15.714 "uuid": "17971385-a1f9-46ae-b144-c8631a44cd34", 00:42:15.714 "strip_size_kb": 0, 00:42:15.714 "state": "online", 00:42:15.714 "raid_level": "raid1", 00:42:15.714 "superblock": true, 00:42:15.714 "num_base_bdevs": 4, 00:42:15.714 "num_base_bdevs_discovered": 2, 00:42:15.714 "num_base_bdevs_operational": 2, 00:42:15.714 "base_bdevs_list": [ 00:42:15.714 { 00:42:15.714 "name": null, 00:42:15.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:15.714 "is_configured": false, 00:42:15.714 "data_offset": 0, 00:42:15.714 "data_size": 63488 00:42:15.714 }, 00:42:15.714 { 00:42:15.714 "name": null, 00:42:15.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:15.714 "is_configured": false, 00:42:15.714 "data_offset": 2048, 00:42:15.714 "data_size": 63488 00:42:15.714 }, 00:42:15.714 { 00:42:15.714 "name": "BaseBdev3", 00:42:15.714 "uuid": "09bc17dd-c049-5b88-ad5c-1d5583b3fa79", 00:42:15.714 
"is_configured": true, 00:42:15.714 "data_offset": 2048, 00:42:15.714 "data_size": 63488 00:42:15.714 }, 00:42:15.714 { 00:42:15.714 "name": "BaseBdev4", 00:42:15.714 "uuid": "7080fca4-c6c3-55e8-be19-70da3c7468b1", 00:42:15.714 "is_configured": true, 00:42:15.714 "data_offset": 2048, 00:42:15.714 "data_size": 63488 00:42:15.714 } 00:42:15.714 ] 00:42:15.714 }' 00:42:15.714 05:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:15.714 05:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:16.281 05:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:42:16.281 05:33:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.281 05:33:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:16.281 [2024-12-09 05:33:03.050461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:16.281 [2024-12-09 05:33:03.050804] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:42:16.281 [2024-12-09 05:33:03.050830] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:42:16.281 [2024-12-09 05:33:03.050894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:16.281 [2024-12-09 05:33:03.065640] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:42:16.281 05:33:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.281 05:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:42:16.281 [2024-12-09 05:33:03.068631] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:17.216 05:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:17.216 05:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:17.216 05:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:17.216 05:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:17.216 05:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:17.216 05:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:17.216 05:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:17.216 05:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.216 05:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:17.216 05:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.216 05:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:17.216 "name": "raid_bdev1", 00:42:17.216 "uuid": "17971385-a1f9-46ae-b144-c8631a44cd34", 00:42:17.216 "strip_size_kb": 0, 00:42:17.216 "state": "online", 00:42:17.216 "raid_level": "raid1", 
00:42:17.216 "superblock": true, 00:42:17.216 "num_base_bdevs": 4, 00:42:17.216 "num_base_bdevs_discovered": 3, 00:42:17.216 "num_base_bdevs_operational": 3, 00:42:17.216 "process": { 00:42:17.216 "type": "rebuild", 00:42:17.216 "target": "spare", 00:42:17.216 "progress": { 00:42:17.216 "blocks": 20480, 00:42:17.216 "percent": 32 00:42:17.216 } 00:42:17.216 }, 00:42:17.216 "base_bdevs_list": [ 00:42:17.216 { 00:42:17.216 "name": "spare", 00:42:17.216 "uuid": "7ba2f017-3274-5f10-b3a0-554216cc697c", 00:42:17.216 "is_configured": true, 00:42:17.216 "data_offset": 2048, 00:42:17.216 "data_size": 63488 00:42:17.216 }, 00:42:17.216 { 00:42:17.216 "name": null, 00:42:17.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:17.216 "is_configured": false, 00:42:17.216 "data_offset": 2048, 00:42:17.216 "data_size": 63488 00:42:17.216 }, 00:42:17.216 { 00:42:17.217 "name": "BaseBdev3", 00:42:17.217 "uuid": "09bc17dd-c049-5b88-ad5c-1d5583b3fa79", 00:42:17.217 "is_configured": true, 00:42:17.217 "data_offset": 2048, 00:42:17.217 "data_size": 63488 00:42:17.217 }, 00:42:17.217 { 00:42:17.217 "name": "BaseBdev4", 00:42:17.217 "uuid": "7080fca4-c6c3-55e8-be19-70da3c7468b1", 00:42:17.217 "is_configured": true, 00:42:17.217 "data_offset": 2048, 00:42:17.217 "data_size": 63488 00:42:17.217 } 00:42:17.217 ] 00:42:17.217 }' 00:42:17.217 05:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:17.217 05:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:17.217 05:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:17.476 05:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:17.476 05:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:42:17.476 05:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:42:17.476 05:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:17.476 [2024-12-09 05:33:04.226886] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:17.476 [2024-12-09 05:33:04.278854] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:42:17.476 [2024-12-09 05:33:04.278970] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:17.476 [2024-12-09 05:33:04.279002] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:17.476 [2024-12-09 05:33:04.279015] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:42:17.476 05:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.476 05:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:42:17.476 05:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:17.476 05:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:17.476 05:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:17.476 05:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:17.476 05:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:17.476 05:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:17.476 05:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:17.476 05:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:17.476 05:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:17.476 05:33:04 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:17.476 05:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.476 05:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:17.476 05:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:17.476 05:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.476 05:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:17.476 "name": "raid_bdev1", 00:42:17.476 "uuid": "17971385-a1f9-46ae-b144-c8631a44cd34", 00:42:17.476 "strip_size_kb": 0, 00:42:17.476 "state": "online", 00:42:17.476 "raid_level": "raid1", 00:42:17.476 "superblock": true, 00:42:17.476 "num_base_bdevs": 4, 00:42:17.476 "num_base_bdevs_discovered": 2, 00:42:17.476 "num_base_bdevs_operational": 2, 00:42:17.476 "base_bdevs_list": [ 00:42:17.476 { 00:42:17.476 "name": null, 00:42:17.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:17.476 "is_configured": false, 00:42:17.476 "data_offset": 0, 00:42:17.476 "data_size": 63488 00:42:17.476 }, 00:42:17.476 { 00:42:17.476 "name": null, 00:42:17.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:17.476 "is_configured": false, 00:42:17.476 "data_offset": 2048, 00:42:17.476 "data_size": 63488 00:42:17.476 }, 00:42:17.476 { 00:42:17.476 "name": "BaseBdev3", 00:42:17.476 "uuid": "09bc17dd-c049-5b88-ad5c-1d5583b3fa79", 00:42:17.476 "is_configured": true, 00:42:17.476 "data_offset": 2048, 00:42:17.476 "data_size": 63488 00:42:17.476 }, 00:42:17.476 { 00:42:17.476 "name": "BaseBdev4", 00:42:17.476 "uuid": "7080fca4-c6c3-55e8-be19-70da3c7468b1", 00:42:17.476 "is_configured": true, 00:42:17.476 "data_offset": 2048, 00:42:17.476 "data_size": 63488 00:42:17.476 } 00:42:17.476 ] 00:42:17.476 }' 00:42:17.476 05:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:42:17.476 05:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:18.044 05:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:42:18.044 05:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:18.044 05:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:18.044 [2024-12-09 05:33:04.783930] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:42:18.044 [2024-12-09 05:33:04.784042] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:18.044 [2024-12-09 05:33:04.784095] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:42:18.044 [2024-12-09 05:33:04.784112] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:18.044 [2024-12-09 05:33:04.784808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:18.044 [2024-12-09 05:33:04.784842] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:42:18.044 [2024-12-09 05:33:04.784986] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:42:18.044 [2024-12-09 05:33:04.785013] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:42:18.044 [2024-12-09 05:33:04.785035] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:42:18.044 [2024-12-09 05:33:04.785071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:18.044 [2024-12-09 05:33:04.798547] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:42:18.044 spare 00:42:18.044 05:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:18.044 05:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:42:18.044 [2024-12-09 05:33:04.801245] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:18.986 05:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:18.986 05:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:18.986 05:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:18.986 05:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:18.986 05:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:18.986 05:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:18.986 05:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:18.986 05:33:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:18.986 05:33:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:18.986 05:33:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:18.986 05:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:18.986 "name": "raid_bdev1", 00:42:18.986 "uuid": "17971385-a1f9-46ae-b144-c8631a44cd34", 00:42:18.986 "strip_size_kb": 0, 00:42:18.986 "state": "online", 00:42:18.986 
"raid_level": "raid1", 00:42:18.986 "superblock": true, 00:42:18.986 "num_base_bdevs": 4, 00:42:18.986 "num_base_bdevs_discovered": 3, 00:42:18.986 "num_base_bdevs_operational": 3, 00:42:18.986 "process": { 00:42:18.986 "type": "rebuild", 00:42:18.986 "target": "spare", 00:42:18.986 "progress": { 00:42:18.986 "blocks": 20480, 00:42:18.986 "percent": 32 00:42:18.986 } 00:42:18.986 }, 00:42:18.986 "base_bdevs_list": [ 00:42:18.986 { 00:42:18.986 "name": "spare", 00:42:18.986 "uuid": "7ba2f017-3274-5f10-b3a0-554216cc697c", 00:42:18.986 "is_configured": true, 00:42:18.986 "data_offset": 2048, 00:42:18.986 "data_size": 63488 00:42:18.986 }, 00:42:18.986 { 00:42:18.986 "name": null, 00:42:18.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:18.986 "is_configured": false, 00:42:18.986 "data_offset": 2048, 00:42:18.986 "data_size": 63488 00:42:18.986 }, 00:42:18.986 { 00:42:18.986 "name": "BaseBdev3", 00:42:18.986 "uuid": "09bc17dd-c049-5b88-ad5c-1d5583b3fa79", 00:42:18.986 "is_configured": true, 00:42:18.986 "data_offset": 2048, 00:42:18.987 "data_size": 63488 00:42:18.987 }, 00:42:18.987 { 00:42:18.987 "name": "BaseBdev4", 00:42:18.987 "uuid": "7080fca4-c6c3-55e8-be19-70da3c7468b1", 00:42:18.987 "is_configured": true, 00:42:18.987 "data_offset": 2048, 00:42:18.987 "data_size": 63488 00:42:18.987 } 00:42:18.987 ] 00:42:18.987 }' 00:42:18.987 05:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:18.987 05:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:18.987 05:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:19.245 05:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:19.245 05:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:42:19.245 05:33:05 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.245 05:33:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:19.245 [2024-12-09 05:33:05.975403] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:19.245 [2024-12-09 05:33:06.011320] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:42:19.245 [2024-12-09 05:33:06.011427] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:19.245 [2024-12-09 05:33:06.011456] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:19.245 [2024-12-09 05:33:06.011473] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:42:19.245 05:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.245 05:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:42:19.245 05:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:19.245 05:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:19.245 05:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:19.245 05:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:19.245 05:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:19.245 05:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:19.245 05:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:19.245 05:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:19.245 05:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:19.245 
05:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:19.245 05:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:19.245 05:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.245 05:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:19.245 05:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.245 05:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:19.245 "name": "raid_bdev1", 00:42:19.245 "uuid": "17971385-a1f9-46ae-b144-c8631a44cd34", 00:42:19.245 "strip_size_kb": 0, 00:42:19.245 "state": "online", 00:42:19.245 "raid_level": "raid1", 00:42:19.245 "superblock": true, 00:42:19.245 "num_base_bdevs": 4, 00:42:19.245 "num_base_bdevs_discovered": 2, 00:42:19.245 "num_base_bdevs_operational": 2, 00:42:19.245 "base_bdevs_list": [ 00:42:19.245 { 00:42:19.245 "name": null, 00:42:19.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:19.245 "is_configured": false, 00:42:19.245 "data_offset": 0, 00:42:19.245 "data_size": 63488 00:42:19.245 }, 00:42:19.245 { 00:42:19.245 "name": null, 00:42:19.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:19.245 "is_configured": false, 00:42:19.245 "data_offset": 2048, 00:42:19.245 "data_size": 63488 00:42:19.245 }, 00:42:19.245 { 00:42:19.245 "name": "BaseBdev3", 00:42:19.245 "uuid": "09bc17dd-c049-5b88-ad5c-1d5583b3fa79", 00:42:19.245 "is_configured": true, 00:42:19.245 "data_offset": 2048, 00:42:19.245 "data_size": 63488 00:42:19.245 }, 00:42:19.245 { 00:42:19.245 "name": "BaseBdev4", 00:42:19.245 "uuid": "7080fca4-c6c3-55e8-be19-70da3c7468b1", 00:42:19.245 "is_configured": true, 00:42:19.245 "data_offset": 2048, 00:42:19.245 "data_size": 63488 00:42:19.245 } 00:42:19.245 ] 00:42:19.245 }' 00:42:19.245 05:33:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:19.245 05:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:19.811 05:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:19.811 05:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:19.811 05:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:42:19.811 05:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:42:19.811 05:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:19.811 05:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:19.811 05:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:19.811 05:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.811 05:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:19.811 05:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.811 05:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:19.811 "name": "raid_bdev1", 00:42:19.811 "uuid": "17971385-a1f9-46ae-b144-c8631a44cd34", 00:42:19.811 "strip_size_kb": 0, 00:42:19.811 "state": "online", 00:42:19.811 "raid_level": "raid1", 00:42:19.811 "superblock": true, 00:42:19.811 "num_base_bdevs": 4, 00:42:19.811 "num_base_bdevs_discovered": 2, 00:42:19.811 "num_base_bdevs_operational": 2, 00:42:19.811 "base_bdevs_list": [ 00:42:19.811 { 00:42:19.811 "name": null, 00:42:19.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:19.811 "is_configured": false, 00:42:19.811 "data_offset": 0, 00:42:19.811 "data_size": 63488 00:42:19.811 }, 00:42:19.811 
{ 00:42:19.811 "name": null, 00:42:19.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:19.811 "is_configured": false, 00:42:19.811 "data_offset": 2048, 00:42:19.811 "data_size": 63488 00:42:19.811 }, 00:42:19.811 { 00:42:19.811 "name": "BaseBdev3", 00:42:19.811 "uuid": "09bc17dd-c049-5b88-ad5c-1d5583b3fa79", 00:42:19.811 "is_configured": true, 00:42:19.811 "data_offset": 2048, 00:42:19.811 "data_size": 63488 00:42:19.811 }, 00:42:19.811 { 00:42:19.811 "name": "BaseBdev4", 00:42:19.811 "uuid": "7080fca4-c6c3-55e8-be19-70da3c7468b1", 00:42:19.811 "is_configured": true, 00:42:19.811 "data_offset": 2048, 00:42:19.811 "data_size": 63488 00:42:19.811 } 00:42:19.811 ] 00:42:19.811 }' 00:42:19.811 05:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:19.811 05:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:42:19.811 05:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:19.811 05:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:19.811 05:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:42:19.811 05:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.811 05:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:19.811 05:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.811 05:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:42:19.811 05:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.811 05:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:19.811 [2024-12-09 05:33:06.736082] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:42:19.811 [2024-12-09 05:33:06.736175] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:19.811 [2024-12-09 05:33:06.736212] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:42:19.811 [2024-12-09 05:33:06.736231] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:19.811 [2024-12-09 05:33:06.736926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:19.811 [2024-12-09 05:33:06.736978] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:42:19.811 [2024-12-09 05:33:06.737098] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:42:19.811 [2024-12-09 05:33:06.737128] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:42:19.811 [2024-12-09 05:33:06.737140] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:42:19.811 [2024-12-09 05:33:06.737175] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:42:19.811 BaseBdev1 00:42:19.811 05:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.811 05:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:42:21.191 05:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:42:21.191 05:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:21.191 05:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:21.191 05:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:21.191 05:33:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:21.191 05:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:21.191 05:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:21.191 05:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:21.191 05:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:21.191 05:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:21.191 05:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:21.191 05:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:21.191 05:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.191 05:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:21.191 05:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.191 05:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:21.191 "name": "raid_bdev1", 00:42:21.191 "uuid": "17971385-a1f9-46ae-b144-c8631a44cd34", 00:42:21.191 "strip_size_kb": 0, 00:42:21.191 "state": "online", 00:42:21.191 "raid_level": "raid1", 00:42:21.191 "superblock": true, 00:42:21.191 "num_base_bdevs": 4, 00:42:21.191 "num_base_bdevs_discovered": 2, 00:42:21.191 "num_base_bdevs_operational": 2, 00:42:21.191 "base_bdevs_list": [ 00:42:21.191 { 00:42:21.191 "name": null, 00:42:21.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:21.191 "is_configured": false, 00:42:21.191 "data_offset": 0, 00:42:21.191 "data_size": 63488 00:42:21.191 }, 00:42:21.191 { 00:42:21.191 "name": null, 00:42:21.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:21.191 
"is_configured": false, 00:42:21.191 "data_offset": 2048, 00:42:21.191 "data_size": 63488 00:42:21.191 }, 00:42:21.191 { 00:42:21.191 "name": "BaseBdev3", 00:42:21.191 "uuid": "09bc17dd-c049-5b88-ad5c-1d5583b3fa79", 00:42:21.191 "is_configured": true, 00:42:21.191 "data_offset": 2048, 00:42:21.191 "data_size": 63488 00:42:21.191 }, 00:42:21.191 { 00:42:21.191 "name": "BaseBdev4", 00:42:21.191 "uuid": "7080fca4-c6c3-55e8-be19-70da3c7468b1", 00:42:21.191 "is_configured": true, 00:42:21.191 "data_offset": 2048, 00:42:21.191 "data_size": 63488 00:42:21.191 } 00:42:21.191 ] 00:42:21.191 }' 00:42:21.191 05:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:21.191 05:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:21.482 05:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:21.482 05:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:21.482 05:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:42:21.482 05:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:42:21.482 05:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:21.482 05:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:21.482 05:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.482 05:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:21.482 05:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:21.482 05:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.482 05:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:42:21.482 "name": "raid_bdev1", 00:42:21.482 "uuid": "17971385-a1f9-46ae-b144-c8631a44cd34", 00:42:21.482 "strip_size_kb": 0, 00:42:21.482 "state": "online", 00:42:21.482 "raid_level": "raid1", 00:42:21.482 "superblock": true, 00:42:21.482 "num_base_bdevs": 4, 00:42:21.482 "num_base_bdevs_discovered": 2, 00:42:21.482 "num_base_bdevs_operational": 2, 00:42:21.482 "base_bdevs_list": [ 00:42:21.482 { 00:42:21.482 "name": null, 00:42:21.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:21.482 "is_configured": false, 00:42:21.482 "data_offset": 0, 00:42:21.482 "data_size": 63488 00:42:21.482 }, 00:42:21.482 { 00:42:21.482 "name": null, 00:42:21.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:21.482 "is_configured": false, 00:42:21.483 "data_offset": 2048, 00:42:21.483 "data_size": 63488 00:42:21.483 }, 00:42:21.483 { 00:42:21.483 "name": "BaseBdev3", 00:42:21.483 "uuid": "09bc17dd-c049-5b88-ad5c-1d5583b3fa79", 00:42:21.483 "is_configured": true, 00:42:21.483 "data_offset": 2048, 00:42:21.483 "data_size": 63488 00:42:21.483 }, 00:42:21.483 { 00:42:21.483 "name": "BaseBdev4", 00:42:21.483 "uuid": "7080fca4-c6c3-55e8-be19-70da3c7468b1", 00:42:21.483 "is_configured": true, 00:42:21.483 "data_offset": 2048, 00:42:21.483 "data_size": 63488 00:42:21.483 } 00:42:21.483 ] 00:42:21.483 }' 00:42:21.483 05:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:21.483 05:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:42:21.483 05:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:21.483 05:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:21.483 05:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:42:21.483 05:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:42:21.483 05:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:42:21.483 05:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:42:21.483 05:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:21.483 05:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:42:21.483 05:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:21.483 05:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:42:21.483 05:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.483 05:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:21.483 [2024-12-09 05:33:08.432650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:42:21.483 [2024-12-09 05:33:08.432956] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:42:21.483 [2024-12-09 05:33:08.432984] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:42:21.483 request: 00:42:21.483 { 00:42:21.483 "base_bdev": "BaseBdev1", 00:42:21.483 "raid_bdev": "raid_bdev1", 00:42:21.483 "method": "bdev_raid_add_base_bdev", 00:42:21.483 "req_id": 1 00:42:21.483 } 00:42:21.483 Got JSON-RPC error response 00:42:21.483 response: 00:42:21.483 { 00:42:21.483 "code": -22, 00:42:21.483 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:42:21.483 } 00:42:21.483 05:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:42:21.483 05:33:08 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:42:21.483 05:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:21.483 05:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:21.483 05:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:21.483 05:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:42:22.855 05:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:42:22.855 05:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:22.855 05:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:22.855 05:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:22.855 05:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:22.855 05:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:22.855 05:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:22.855 05:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:22.855 05:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:22.855 05:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:22.855 05:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:22.855 05:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:22.855 05:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:22.855 05:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:42:22.855 05:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:22.855 05:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:22.855 "name": "raid_bdev1", 00:42:22.855 "uuid": "17971385-a1f9-46ae-b144-c8631a44cd34", 00:42:22.855 "strip_size_kb": 0, 00:42:22.855 "state": "online", 00:42:22.855 "raid_level": "raid1", 00:42:22.855 "superblock": true, 00:42:22.855 "num_base_bdevs": 4, 00:42:22.855 "num_base_bdevs_discovered": 2, 00:42:22.855 "num_base_bdevs_operational": 2, 00:42:22.855 "base_bdevs_list": [ 00:42:22.855 { 00:42:22.855 "name": null, 00:42:22.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:22.855 "is_configured": false, 00:42:22.855 "data_offset": 0, 00:42:22.855 "data_size": 63488 00:42:22.855 }, 00:42:22.855 { 00:42:22.855 "name": null, 00:42:22.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:22.855 "is_configured": false, 00:42:22.855 "data_offset": 2048, 00:42:22.855 "data_size": 63488 00:42:22.855 }, 00:42:22.855 { 00:42:22.855 "name": "BaseBdev3", 00:42:22.855 "uuid": "09bc17dd-c049-5b88-ad5c-1d5583b3fa79", 00:42:22.855 "is_configured": true, 00:42:22.855 "data_offset": 2048, 00:42:22.855 "data_size": 63488 00:42:22.855 }, 00:42:22.855 { 00:42:22.855 "name": "BaseBdev4", 00:42:22.855 "uuid": "7080fca4-c6c3-55e8-be19-70da3c7468b1", 00:42:22.855 "is_configured": true, 00:42:22.855 "data_offset": 2048, 00:42:22.855 "data_size": 63488 00:42:22.855 } 00:42:22.855 ] 00:42:22.855 }' 00:42:22.855 05:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:22.855 05:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:23.113 05:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:23.113 05:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:23.113 05:33:09 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:42:23.113 05:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:42:23.113 05:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:23.113 05:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:23.113 05:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:23.113 05:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:23.113 05:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:23.113 05:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:23.113 05:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:23.113 "name": "raid_bdev1", 00:42:23.113 "uuid": "17971385-a1f9-46ae-b144-c8631a44cd34", 00:42:23.113 "strip_size_kb": 0, 00:42:23.113 "state": "online", 00:42:23.113 "raid_level": "raid1", 00:42:23.113 "superblock": true, 00:42:23.113 "num_base_bdevs": 4, 00:42:23.113 "num_base_bdevs_discovered": 2, 00:42:23.113 "num_base_bdevs_operational": 2, 00:42:23.113 "base_bdevs_list": [ 00:42:23.113 { 00:42:23.113 "name": null, 00:42:23.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:23.113 "is_configured": false, 00:42:23.113 "data_offset": 0, 00:42:23.113 "data_size": 63488 00:42:23.113 }, 00:42:23.113 { 00:42:23.113 "name": null, 00:42:23.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:23.113 "is_configured": false, 00:42:23.113 "data_offset": 2048, 00:42:23.113 "data_size": 63488 00:42:23.113 }, 00:42:23.113 { 00:42:23.113 "name": "BaseBdev3", 00:42:23.113 "uuid": "09bc17dd-c049-5b88-ad5c-1d5583b3fa79", 00:42:23.113 "is_configured": true, 00:42:23.113 "data_offset": 2048, 00:42:23.113 "data_size": 63488 00:42:23.113 }, 
00:42:23.113 { 00:42:23.113 "name": "BaseBdev4", 00:42:23.113 "uuid": "7080fca4-c6c3-55e8-be19-70da3c7468b1", 00:42:23.113 "is_configured": true, 00:42:23.113 "data_offset": 2048, 00:42:23.113 "data_size": 63488 00:42:23.113 } 00:42:23.113 ] 00:42:23.113 }' 00:42:23.113 05:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:23.371 05:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:42:23.371 05:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:23.371 05:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:23.371 05:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78387 00:42:23.371 05:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78387 ']' 00:42:23.371 05:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78387 00:42:23.371 05:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:42:23.371 05:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:23.371 05:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78387 00:42:23.371 killing process with pid 78387 00:42:23.371 Received shutdown signal, test time was about 60.000000 seconds 00:42:23.371 00:42:23.371 Latency(us) 00:42:23.371 [2024-12-09T05:33:10.343Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:23.371 [2024-12-09T05:33:10.343Z] =================================================================================================================== 00:42:23.371 [2024-12-09T05:33:10.343Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:23.371 05:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:42:23.371 05:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:23.371 05:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78387' 00:42:23.371 05:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78387 00:42:23.371 [2024-12-09 05:33:10.182430] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:42:23.371 05:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78387 00:42:23.371 [2024-12-09 05:33:10.182716] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:23.371 [2024-12-09 05:33:10.182853] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:23.371 [2024-12-09 05:33:10.182874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:42:23.938 [2024-12-09 05:33:10.626743] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:42:24.873 05:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:42:24.873 00:42:24.873 real 0m29.155s 00:42:24.873 user 0m35.294s 00:42:24.873 sys 0m4.137s 00:42:24.873 05:33:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:24.873 ************************************ 00:42:24.873 END TEST raid_rebuild_test_sb 00:42:24.873 ************************************ 00:42:24.873 05:33:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:25.131 05:33:11 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:42:25.131 05:33:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:42:25.131 05:33:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:25.131 05:33:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:42:25.131 ************************************ 00:42:25.131 START TEST raid_rebuild_test_io 00:42:25.131 ************************************ 00:42:25.131 05:33:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:42:25.131 05:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:42:25.131 05:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:42:25.131 05:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:42:25.131 05:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:42:25.131 05:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:42:25.131 05:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:42:25.131 05:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:42:25.131 05:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:42:25.131 05:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:42:25.131 05:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:42:25.131 05:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:42:25.131 05:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:42:25.132 05:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:42:25.132 05:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:42:25.132 05:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:42:25.132 05:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:42:25.132 05:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:42:25.132 05:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:42:25.132 05:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:42:25.132 05:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:42:25.132 05:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:42:25.132 05:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:42:25.132 05:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:42:25.132 05:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:42:25.132 05:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:42:25.132 05:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:42:25.132 05:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:42:25.132 05:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:42:25.132 05:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:42:25.132 05:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79184 00:42:25.132 05:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79184 00:42:25.132 05:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:42:25.132 05:33:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 79184 ']' 00:42:25.132 05:33:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:25.132 05:33:11 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:42:25.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:25.132 05:33:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:25.132 05:33:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:25.132 05:33:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:25.132 I/O size of 3145728 is greater than zero copy threshold (65536). 00:42:25.132 Zero copy mechanism will not be used. 00:42:25.132 [2024-12-09 05:33:12.014742] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:42:25.132 [2024-12-09 05:33:12.014932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79184 ] 00:42:25.390 [2024-12-09 05:33:12.215513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:25.649 [2024-12-09 05:33:12.372849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:25.649 [2024-12-09 05:33:12.605390] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:42:25.649 [2024-12-09 05:33:12.605494] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:42:26.218 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:26.218 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:42:26.218 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:42:26.218 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:42:26.218 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.218 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:26.218 BaseBdev1_malloc 00:42:26.218 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.218 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:42:26.218 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.218 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:26.218 [2024-12-09 05:33:13.078982] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:42:26.218 [2024-12-09 05:33:13.079234] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:26.218 [2024-12-09 05:33:13.079283] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:42:26.218 [2024-12-09 05:33:13.079308] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:26.218 [2024-12-09 05:33:13.082339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:26.218 [2024-12-09 05:33:13.082573] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:42:26.218 BaseBdev1 00:42:26.218 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.218 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:42:26.218 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:42:26.218 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.218 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:42:26.218 BaseBdev2_malloc 00:42:26.218 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.218 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:42:26.218 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.218 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:26.218 [2024-12-09 05:33:13.132722] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:42:26.218 [2024-12-09 05:33:13.132982] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:26.218 [2024-12-09 05:33:13.133035] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:42:26.218 [2024-12-09 05:33:13.133060] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:26.218 [2024-12-09 05:33:13.136162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:26.218 [2024-12-09 05:33:13.136377] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:42:26.218 BaseBdev2 00:42:26.218 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.218 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:42:26.218 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:42:26.218 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.218 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:26.478 BaseBdev3_malloc 00:42:26.478 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.478 05:33:13 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:42:26.478 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.478 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:26.478 [2024-12-09 05:33:13.201381] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:42:26.478 [2024-12-09 05:33:13.201465] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:26.478 [2024-12-09 05:33:13.201503] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:42:26.478 [2024-12-09 05:33:13.201525] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:26.478 [2024-12-09 05:33:13.204473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:26.478 [2024-12-09 05:33:13.204531] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:42:26.478 BaseBdev3 00:42:26.478 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.478 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:42:26.478 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:42:26.478 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.478 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:26.478 BaseBdev4_malloc 00:42:26.478 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.478 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:42:26.478 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:42:26.478 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:26.478 [2024-12-09 05:33:13.250900] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:42:26.478 [2024-12-09 05:33:13.251131] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:26.478 [2024-12-09 05:33:13.251179] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:42:26.478 [2024-12-09 05:33:13.251204] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:26.478 [2024-12-09 05:33:13.254179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:26.478 [2024-12-09 05:33:13.254371] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:42:26.478 BaseBdev4 00:42:26.478 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.478 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:42:26.478 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.478 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:26.478 spare_malloc 00:42:26.478 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.478 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:42:26.478 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.478 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:26.478 spare_delay 00:42:26.478 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.478 05:33:13 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:42:26.478 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.478 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:26.478 [2024-12-09 05:33:13.313115] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:42:26.478 [2024-12-09 05:33:13.313220] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:26.478 [2024-12-09 05:33:13.313254] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:42:26.478 [2024-12-09 05:33:13.313275] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:26.478 [2024-12-09 05:33:13.316484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:26.478 [2024-12-09 05:33:13.316680] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:42:26.478 spare 00:42:26.478 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.478 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:42:26.478 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.478 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:26.478 [2024-12-09 05:33:13.321397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:42:26.478 [2024-12-09 05:33:13.324245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:42:26.478 [2024-12-09 05:33:13.324361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:42:26.478 [2024-12-09 05:33:13.324461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:42:26.478 [2024-12-09 05:33:13.324589] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:42:26.478 [2024-12-09 05:33:13.324618] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:42:26.479 [2024-12-09 05:33:13.324989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:42:26.479 [2024-12-09 05:33:13.325242] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:42:26.479 [2024-12-09 05:33:13.325265] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:42:26.479 [2024-12-09 05:33:13.325531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:26.479 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.479 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:42:26.479 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:26.479 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:26.479 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:26.479 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:26.479 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:42:26.479 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:26.479 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:26.479 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:26.479 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:42:26.479 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:26.479 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:26.479 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.479 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:26.479 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.479 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:26.479 "name": "raid_bdev1", 00:42:26.479 "uuid": "8bfc8533-bb28-40a1-bb9c-c35aeab86b8f", 00:42:26.479 "strip_size_kb": 0, 00:42:26.479 "state": "online", 00:42:26.479 "raid_level": "raid1", 00:42:26.479 "superblock": false, 00:42:26.479 "num_base_bdevs": 4, 00:42:26.479 "num_base_bdevs_discovered": 4, 00:42:26.479 "num_base_bdevs_operational": 4, 00:42:26.479 "base_bdevs_list": [ 00:42:26.479 { 00:42:26.479 "name": "BaseBdev1", 00:42:26.479 "uuid": "20a63037-3ca8-5cd1-99c2-1b41942f8d71", 00:42:26.479 "is_configured": true, 00:42:26.479 "data_offset": 0, 00:42:26.479 "data_size": 65536 00:42:26.479 }, 00:42:26.479 { 00:42:26.479 "name": "BaseBdev2", 00:42:26.479 "uuid": "e98b78d0-4a7c-5577-8f2a-1ff2886cdbe5", 00:42:26.479 "is_configured": true, 00:42:26.479 "data_offset": 0, 00:42:26.479 "data_size": 65536 00:42:26.479 }, 00:42:26.479 { 00:42:26.479 "name": "BaseBdev3", 00:42:26.479 "uuid": "ed690dd5-d957-55f0-9e63-aa320fc08fec", 00:42:26.479 "is_configured": true, 00:42:26.479 "data_offset": 0, 00:42:26.479 "data_size": 65536 00:42:26.479 }, 00:42:26.479 { 00:42:26.479 "name": "BaseBdev4", 00:42:26.479 "uuid": "cb007256-394c-510c-93b3-4232a1762664", 00:42:26.479 "is_configured": true, 00:42:26.479 "data_offset": 0, 00:42:26.479 "data_size": 65536 00:42:26.479 } 00:42:26.479 ] 00:42:26.479 }' 00:42:26.479 
05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:26.479 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:27.046 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:42:27.046 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:42:27.046 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:27.046 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:27.046 [2024-12-09 05:33:13.882191] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:27.046 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.046 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:42:27.046 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:27.046 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:27.046 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:27.046 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:42:27.046 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.046 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:42:27.046 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:42:27.046 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:42:27.046 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:27.046 05:33:13 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:42:27.046 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:42:27.046 [2024-12-09 05:33:13.981632] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:42:27.046 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.046 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:42:27.046 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:27.047 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:27.047 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:27.047 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:27.047 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:42:27.047 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:27.047 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:27.047 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:27.047 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:27.047 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:27.047 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:27.047 05:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:27.047 05:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:27.047 05:33:14 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.305 05:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:27.305 "name": "raid_bdev1", 00:42:27.305 "uuid": "8bfc8533-bb28-40a1-bb9c-c35aeab86b8f", 00:42:27.305 "strip_size_kb": 0, 00:42:27.305 "state": "online", 00:42:27.305 "raid_level": "raid1", 00:42:27.305 "superblock": false, 00:42:27.305 "num_base_bdevs": 4, 00:42:27.305 "num_base_bdevs_discovered": 3, 00:42:27.305 "num_base_bdevs_operational": 3, 00:42:27.305 "base_bdevs_list": [ 00:42:27.305 { 00:42:27.305 "name": null, 00:42:27.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:27.305 "is_configured": false, 00:42:27.305 "data_offset": 0, 00:42:27.305 "data_size": 65536 00:42:27.305 }, 00:42:27.305 { 00:42:27.305 "name": "BaseBdev2", 00:42:27.305 "uuid": "e98b78d0-4a7c-5577-8f2a-1ff2886cdbe5", 00:42:27.305 "is_configured": true, 00:42:27.305 "data_offset": 0, 00:42:27.305 "data_size": 65536 00:42:27.305 }, 00:42:27.305 { 00:42:27.305 "name": "BaseBdev3", 00:42:27.305 "uuid": "ed690dd5-d957-55f0-9e63-aa320fc08fec", 00:42:27.305 "is_configured": true, 00:42:27.305 "data_offset": 0, 00:42:27.305 "data_size": 65536 00:42:27.305 }, 00:42:27.305 { 00:42:27.305 "name": "BaseBdev4", 00:42:27.305 "uuid": "cb007256-394c-510c-93b3-4232a1762664", 00:42:27.305 "is_configured": true, 00:42:27.305 "data_offset": 0, 00:42:27.305 "data_size": 65536 00:42:27.305 } 00:42:27.305 ] 00:42:27.305 }' 00:42:27.305 05:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:27.305 05:33:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:27.305 [2024-12-09 05:33:14.094357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:42:27.305 I/O size of 3145728 is greater than zero copy threshold (65536). 00:42:27.305 Zero copy mechanism will not be used. 00:42:27.305 Running I/O for 60 seconds... 
00:42:27.563 05:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:42:27.563 05:33:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:27.563 05:33:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:27.563 [2024-12-09 05:33:14.527006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:27.821 05:33:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.821 05:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:42:27.821 [2024-12-09 05:33:14.580663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:42:27.821 [2024-12-09 05:33:14.583456] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:27.821 [2024-12-09 05:33:14.733352] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:42:28.080 [2024-12-09 05:33:14.971007] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:42:28.080 [2024-12-09 05:33:14.972417] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:42:28.603 142.00 IOPS, 426.00 MiB/s [2024-12-09T05:33:15.575Z] [2024-12-09 05:33:15.336461] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:42:28.603 [2024-12-09 05:33:15.559737] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:42:28.603 05:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:28.862 05:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:42:28.862 05:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:28.862 05:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:28.862 05:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:28.862 05:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:28.862 05:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:28.862 05:33:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.862 05:33:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:28.862 05:33:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.862 05:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:28.862 "name": "raid_bdev1", 00:42:28.862 "uuid": "8bfc8533-bb28-40a1-bb9c-c35aeab86b8f", 00:42:28.862 "strip_size_kb": 0, 00:42:28.862 "state": "online", 00:42:28.862 "raid_level": "raid1", 00:42:28.862 "superblock": false, 00:42:28.862 "num_base_bdevs": 4, 00:42:28.862 "num_base_bdevs_discovered": 4, 00:42:28.862 "num_base_bdevs_operational": 4, 00:42:28.862 "process": { 00:42:28.862 "type": "rebuild", 00:42:28.862 "target": "spare", 00:42:28.862 "progress": { 00:42:28.862 "blocks": 10240, 00:42:28.862 "percent": 15 00:42:28.862 } 00:42:28.862 }, 00:42:28.862 "base_bdevs_list": [ 00:42:28.862 { 00:42:28.862 "name": "spare", 00:42:28.862 "uuid": "90adb64d-3d75-5316-87d1-5a304bb17e68", 00:42:28.862 "is_configured": true, 00:42:28.862 "data_offset": 0, 00:42:28.862 "data_size": 65536 00:42:28.862 }, 00:42:28.862 { 00:42:28.862 "name": "BaseBdev2", 00:42:28.862 "uuid": "e98b78d0-4a7c-5577-8f2a-1ff2886cdbe5", 00:42:28.862 "is_configured": true, 00:42:28.862 "data_offset": 0, 00:42:28.862 
"data_size": 65536 00:42:28.862 }, 00:42:28.862 { 00:42:28.862 "name": "BaseBdev3", 00:42:28.862 "uuid": "ed690dd5-d957-55f0-9e63-aa320fc08fec", 00:42:28.862 "is_configured": true, 00:42:28.862 "data_offset": 0, 00:42:28.862 "data_size": 65536 00:42:28.862 }, 00:42:28.862 { 00:42:28.862 "name": "BaseBdev4", 00:42:28.862 "uuid": "cb007256-394c-510c-93b3-4232a1762664", 00:42:28.862 "is_configured": true, 00:42:28.862 "data_offset": 0, 00:42:28.862 "data_size": 65536 00:42:28.862 } 00:42:28.862 ] 00:42:28.862 }' 00:42:28.862 05:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:28.862 05:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:28.862 05:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:28.862 05:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:28.862 05:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:42:28.862 05:33:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.862 05:33:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:28.862 [2024-12-09 05:33:15.727829] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:28.862 [2024-12-09 05:33:15.783751] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:42:29.121 [2024-12-09 05:33:15.883860] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:42:29.121 [2024-12-09 05:33:15.897568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:29.121 [2024-12-09 05:33:15.897832] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:29.121 [2024-12-09 05:33:15.897911] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:42:29.121 [2024-12-09 05:33:15.930092] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:42:29.121 05:33:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.121 05:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:42:29.121 05:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:29.121 05:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:29.121 05:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:29.121 05:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:29.121 05:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:42:29.121 05:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:29.121 05:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:29.121 05:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:29.121 05:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:29.121 05:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:29.121 05:33:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.121 05:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:29.121 05:33:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:29.121 05:33:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:42:29.121 05:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:29.121 "name": "raid_bdev1", 00:42:29.121 "uuid": "8bfc8533-bb28-40a1-bb9c-c35aeab86b8f", 00:42:29.121 "strip_size_kb": 0, 00:42:29.121 "state": "online", 00:42:29.121 "raid_level": "raid1", 00:42:29.121 "superblock": false, 00:42:29.121 "num_base_bdevs": 4, 00:42:29.121 "num_base_bdevs_discovered": 3, 00:42:29.121 "num_base_bdevs_operational": 3, 00:42:29.121 "base_bdevs_list": [ 00:42:29.121 { 00:42:29.121 "name": null, 00:42:29.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:29.121 "is_configured": false, 00:42:29.121 "data_offset": 0, 00:42:29.121 "data_size": 65536 00:42:29.121 }, 00:42:29.121 { 00:42:29.121 "name": "BaseBdev2", 00:42:29.121 "uuid": "e98b78d0-4a7c-5577-8f2a-1ff2886cdbe5", 00:42:29.121 "is_configured": true, 00:42:29.121 "data_offset": 0, 00:42:29.121 "data_size": 65536 00:42:29.121 }, 00:42:29.121 { 00:42:29.121 "name": "BaseBdev3", 00:42:29.121 "uuid": "ed690dd5-d957-55f0-9e63-aa320fc08fec", 00:42:29.121 "is_configured": true, 00:42:29.121 "data_offset": 0, 00:42:29.121 "data_size": 65536 00:42:29.121 }, 00:42:29.121 { 00:42:29.121 "name": "BaseBdev4", 00:42:29.121 "uuid": "cb007256-394c-510c-93b3-4232a1762664", 00:42:29.121 "is_configured": true, 00:42:29.121 "data_offset": 0, 00:42:29.121 "data_size": 65536 00:42:29.121 } 00:42:29.121 ] 00:42:29.121 }' 00:42:29.121 05:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:29.121 05:33:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:29.639 128.50 IOPS, 385.50 MiB/s [2024-12-09T05:33:16.611Z] 05:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:29.639 05:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:29.639 05:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:42:29.639 05:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:42:29.639 05:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:29.639 05:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:29.639 05:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:29.639 05:33:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.639 05:33:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:29.639 05:33:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.639 05:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:29.639 "name": "raid_bdev1", 00:42:29.639 "uuid": "8bfc8533-bb28-40a1-bb9c-c35aeab86b8f", 00:42:29.639 "strip_size_kb": 0, 00:42:29.639 "state": "online", 00:42:29.639 "raid_level": "raid1", 00:42:29.639 "superblock": false, 00:42:29.639 "num_base_bdevs": 4, 00:42:29.639 "num_base_bdevs_discovered": 3, 00:42:29.639 "num_base_bdevs_operational": 3, 00:42:29.639 "base_bdevs_list": [ 00:42:29.639 { 00:42:29.639 "name": null, 00:42:29.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:29.639 "is_configured": false, 00:42:29.639 "data_offset": 0, 00:42:29.639 "data_size": 65536 00:42:29.639 }, 00:42:29.639 { 00:42:29.639 "name": "BaseBdev2", 00:42:29.639 "uuid": "e98b78d0-4a7c-5577-8f2a-1ff2886cdbe5", 00:42:29.639 "is_configured": true, 00:42:29.639 "data_offset": 0, 00:42:29.639 "data_size": 65536 00:42:29.639 }, 00:42:29.639 { 00:42:29.639 "name": "BaseBdev3", 00:42:29.639 "uuid": "ed690dd5-d957-55f0-9e63-aa320fc08fec", 00:42:29.639 "is_configured": true, 00:42:29.639 "data_offset": 0, 00:42:29.639 "data_size": 65536 00:42:29.639 }, 00:42:29.639 { 00:42:29.639 "name": "BaseBdev4", 00:42:29.639 
"uuid": "cb007256-394c-510c-93b3-4232a1762664", 00:42:29.639 "is_configured": true, 00:42:29.639 "data_offset": 0, 00:42:29.639 "data_size": 65536 00:42:29.639 } 00:42:29.639 ] 00:42:29.639 }' 00:42:29.639 05:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:29.639 05:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:42:29.639 05:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:29.639 05:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:29.639 05:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:42:29.639 05:33:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.639 05:33:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:29.639 [2024-12-09 05:33:16.595251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:29.898 05:33:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.898 05:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:42:29.898 [2024-12-09 05:33:16.681183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:42:29.898 [2024-12-09 05:33:16.683997] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:29.898 [2024-12-09 05:33:16.818708] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:42:30.156 [2024-12-09 05:33:16.931295] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:42:30.156 [2024-12-09 05:33:16.931973] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 
offset_begin: 0 offset_end: 6144 00:42:30.416 154.33 IOPS, 463.00 MiB/s [2024-12-09T05:33:17.388Z] [2024-12-09 05:33:17.308754] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:42:30.416 [2024-12-09 05:33:17.309417] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:42:31.009 [2024-12-09 05:33:17.648802] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:42:31.009 05:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:31.009 05:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:31.009 05:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:31.009 05:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:31.009 05:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:31.009 05:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:31.009 05:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:31.009 05:33:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:31.009 05:33:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:31.009 05:33:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:31.009 05:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:31.009 "name": "raid_bdev1", 00:42:31.009 "uuid": "8bfc8533-bb28-40a1-bb9c-c35aeab86b8f", 00:42:31.009 "strip_size_kb": 0, 00:42:31.009 "state": "online", 00:42:31.009 "raid_level": "raid1", 00:42:31.009 
"superblock": false, 00:42:31.009 "num_base_bdevs": 4, 00:42:31.009 "num_base_bdevs_discovered": 4, 00:42:31.009 "num_base_bdevs_operational": 4, 00:42:31.009 "process": { 00:42:31.009 "type": "rebuild", 00:42:31.009 "target": "spare", 00:42:31.009 "progress": { 00:42:31.009 "blocks": 14336, 00:42:31.009 "percent": 21 00:42:31.009 } 00:42:31.009 }, 00:42:31.009 "base_bdevs_list": [ 00:42:31.009 { 00:42:31.009 "name": "spare", 00:42:31.009 "uuid": "90adb64d-3d75-5316-87d1-5a304bb17e68", 00:42:31.009 "is_configured": true, 00:42:31.009 "data_offset": 0, 00:42:31.009 "data_size": 65536 00:42:31.009 }, 00:42:31.009 { 00:42:31.010 "name": "BaseBdev2", 00:42:31.010 "uuid": "e98b78d0-4a7c-5577-8f2a-1ff2886cdbe5", 00:42:31.010 "is_configured": true, 00:42:31.010 "data_offset": 0, 00:42:31.010 "data_size": 65536 00:42:31.010 }, 00:42:31.010 { 00:42:31.010 "name": "BaseBdev3", 00:42:31.010 "uuid": "ed690dd5-d957-55f0-9e63-aa320fc08fec", 00:42:31.010 "is_configured": true, 00:42:31.010 "data_offset": 0, 00:42:31.010 "data_size": 65536 00:42:31.010 }, 00:42:31.010 { 00:42:31.010 "name": "BaseBdev4", 00:42:31.010 "uuid": "cb007256-394c-510c-93b3-4232a1762664", 00:42:31.010 "is_configured": true, 00:42:31.010 "data_offset": 0, 00:42:31.010 "data_size": 65536 00:42:31.010 } 00:42:31.010 ] 00:42:31.010 }' 00:42:31.010 05:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:31.010 05:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:31.010 05:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:31.010 05:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:31.010 05:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:42:31.010 05:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 
00:42:31.010 05:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:42:31.010 05:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:42:31.010 05:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:42:31.010 05:33:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:31.010 05:33:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:31.010 [2024-12-09 05:33:17.822675] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:42:31.010 [2024-12-09 05:33:17.862558] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:42:31.010 [2024-12-09 05:33:17.882448] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:42:31.010 [2024-12-09 05:33:17.882551] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:42:31.010 05:33:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:31.010 05:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:42:31.010 05:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:42:31.010 05:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:31.010 05:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:31.010 05:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:31.010 05:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:31.010 05:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:31.010 05:33:17 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:31.010 05:33:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:31.010 05:33:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:31.010 05:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:31.010 05:33:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:31.010 05:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:31.010 "name": "raid_bdev1", 00:42:31.010 "uuid": "8bfc8533-bb28-40a1-bb9c-c35aeab86b8f", 00:42:31.010 "strip_size_kb": 0, 00:42:31.010 "state": "online", 00:42:31.010 "raid_level": "raid1", 00:42:31.010 "superblock": false, 00:42:31.010 "num_base_bdevs": 4, 00:42:31.010 "num_base_bdevs_discovered": 3, 00:42:31.010 "num_base_bdevs_operational": 3, 00:42:31.010 "process": { 00:42:31.010 "type": "rebuild", 00:42:31.010 "target": "spare", 00:42:31.010 "progress": { 00:42:31.010 "blocks": 16384, 00:42:31.010 "percent": 25 00:42:31.010 } 00:42:31.010 }, 00:42:31.010 "base_bdevs_list": [ 00:42:31.010 { 00:42:31.010 "name": "spare", 00:42:31.010 "uuid": "90adb64d-3d75-5316-87d1-5a304bb17e68", 00:42:31.010 "is_configured": true, 00:42:31.010 "data_offset": 0, 00:42:31.010 "data_size": 65536 00:42:31.010 }, 00:42:31.010 { 00:42:31.010 "name": null, 00:42:31.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:31.010 "is_configured": false, 00:42:31.010 "data_offset": 0, 00:42:31.010 "data_size": 65536 00:42:31.010 }, 00:42:31.010 { 00:42:31.010 "name": "BaseBdev3", 00:42:31.010 "uuid": "ed690dd5-d957-55f0-9e63-aa320fc08fec", 00:42:31.010 "is_configured": true, 00:42:31.010 "data_offset": 0, 00:42:31.010 "data_size": 65536 00:42:31.010 }, 00:42:31.010 { 00:42:31.010 "name": "BaseBdev4", 00:42:31.010 "uuid": "cb007256-394c-510c-93b3-4232a1762664", 
00:42:31.010 "is_configured": true, 00:42:31.010 "data_offset": 0, 00:42:31.010 "data_size": 65536 00:42:31.010 } 00:42:31.010 ] 00:42:31.010 }' 00:42:31.010 05:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:31.269 05:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:31.269 05:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:31.269 05:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:31.269 05:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=534 00:42:31.269 05:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:31.269 05:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:31.269 05:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:31.269 05:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:31.269 05:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:31.269 05:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:31.269 05:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:31.269 05:33:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:31.269 05:33:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:31.269 05:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:31.269 05:33:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:31.269 132.00 IOPS, 396.00 MiB/s [2024-12-09T05:33:18.241Z] 05:33:18 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:31.269 "name": "raid_bdev1", 00:42:31.269 "uuid": "8bfc8533-bb28-40a1-bb9c-c35aeab86b8f", 00:42:31.269 "strip_size_kb": 0, 00:42:31.269 "state": "online", 00:42:31.269 "raid_level": "raid1", 00:42:31.269 "superblock": false, 00:42:31.269 "num_base_bdevs": 4, 00:42:31.269 "num_base_bdevs_discovered": 3, 00:42:31.269 "num_base_bdevs_operational": 3, 00:42:31.269 "process": { 00:42:31.269 "type": "rebuild", 00:42:31.269 "target": "spare", 00:42:31.269 "progress": { 00:42:31.269 "blocks": 18432, 00:42:31.269 "percent": 28 00:42:31.269 } 00:42:31.269 }, 00:42:31.269 "base_bdevs_list": [ 00:42:31.269 { 00:42:31.269 "name": "spare", 00:42:31.269 "uuid": "90adb64d-3d75-5316-87d1-5a304bb17e68", 00:42:31.269 "is_configured": true, 00:42:31.269 "data_offset": 0, 00:42:31.269 "data_size": 65536 00:42:31.269 }, 00:42:31.269 { 00:42:31.269 "name": null, 00:42:31.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:31.269 "is_configured": false, 00:42:31.269 "data_offset": 0, 00:42:31.269 "data_size": 65536 00:42:31.269 }, 00:42:31.269 { 00:42:31.270 "name": "BaseBdev3", 00:42:31.270 "uuid": "ed690dd5-d957-55f0-9e63-aa320fc08fec", 00:42:31.270 "is_configured": true, 00:42:31.270 "data_offset": 0, 00:42:31.270 "data_size": 65536 00:42:31.270 }, 00:42:31.270 { 00:42:31.270 "name": "BaseBdev4", 00:42:31.270 "uuid": "cb007256-394c-510c-93b3-4232a1762664", 00:42:31.270 "is_configured": true, 00:42:31.270 "data_offset": 0, 00:42:31.270 "data_size": 65536 00:42:31.270 } 00:42:31.270 ] 00:42:31.270 }' 00:42:31.270 05:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:31.270 05:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:31.270 05:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:31.270 05:33:18 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:31.270 05:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:42:31.528 [2024-12-09 05:33:18.435078] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:42:31.786 [2024-12-09 05:33:18.545528] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:42:32.044 [2024-12-09 05:33:18.867009] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:42:32.044 [2024-12-09 05:33:18.867396] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:42:32.301 120.20 IOPS, 360.60 MiB/s [2024-12-09T05:33:19.274Z] 05:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:32.302 05:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:32.302 05:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:32.302 05:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:32.302 05:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:32.302 05:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:32.302 05:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:32.302 05:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.302 05:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:32.302 05:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:32.302 05:33:19 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.559 05:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:32.559 "name": "raid_bdev1", 00:42:32.559 "uuid": "8bfc8533-bb28-40a1-bb9c-c35aeab86b8f", 00:42:32.559 "strip_size_kb": 0, 00:42:32.559 "state": "online", 00:42:32.559 "raid_level": "raid1", 00:42:32.559 "superblock": false, 00:42:32.559 "num_base_bdevs": 4, 00:42:32.559 "num_base_bdevs_discovered": 3, 00:42:32.559 "num_base_bdevs_operational": 3, 00:42:32.559 "process": { 00:42:32.559 "type": "rebuild", 00:42:32.559 "target": "spare", 00:42:32.559 "progress": { 00:42:32.559 "blocks": 40960, 00:42:32.559 "percent": 62 00:42:32.559 } 00:42:32.560 }, 00:42:32.560 "base_bdevs_list": [ 00:42:32.560 { 00:42:32.560 "name": "spare", 00:42:32.560 "uuid": "90adb64d-3d75-5316-87d1-5a304bb17e68", 00:42:32.560 "is_configured": true, 00:42:32.560 "data_offset": 0, 00:42:32.560 "data_size": 65536 00:42:32.560 }, 00:42:32.560 { 00:42:32.560 "name": null, 00:42:32.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:32.560 "is_configured": false, 00:42:32.560 "data_offset": 0, 00:42:32.560 "data_size": 65536 00:42:32.560 }, 00:42:32.560 { 00:42:32.560 "name": "BaseBdev3", 00:42:32.560 "uuid": "ed690dd5-d957-55f0-9e63-aa320fc08fec", 00:42:32.560 "is_configured": true, 00:42:32.560 "data_offset": 0, 00:42:32.560 "data_size": 65536 00:42:32.560 }, 00:42:32.560 { 00:42:32.560 "name": "BaseBdev4", 00:42:32.560 "uuid": "cb007256-394c-510c-93b3-4232a1762664", 00:42:32.560 "is_configured": true, 00:42:32.560 "data_offset": 0, 00:42:32.560 "data_size": 65536 00:42:32.560 } 00:42:32.560 ] 00:42:32.560 }' 00:42:32.560 05:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:32.560 05:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:32.560 05:33:19 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:32.560 05:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:32.560 05:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:42:33.126 [2024-12-09 05:33:19.846430] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:42:33.126 [2024-12-09 05:33:19.847153] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:42:33.384 106.00 IOPS, 318.00 MiB/s [2024-12-09T05:33:20.356Z] [2024-12-09 05:33:20.309164] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:42:33.642 05:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:33.642 05:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:33.642 05:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:33.642 05:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:33.642 05:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:33.642 05:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:33.642 05:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:33.642 05:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:33.642 05:33:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.642 05:33:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:33.642 05:33:20 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.642 05:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:33.642 "name": "raid_bdev1", 00:42:33.642 "uuid": "8bfc8533-bb28-40a1-bb9c-c35aeab86b8f", 00:42:33.642 "strip_size_kb": 0, 00:42:33.642 "state": "online", 00:42:33.642 "raid_level": "raid1", 00:42:33.642 "superblock": false, 00:42:33.642 "num_base_bdevs": 4, 00:42:33.642 "num_base_bdevs_discovered": 3, 00:42:33.642 "num_base_bdevs_operational": 3, 00:42:33.642 "process": { 00:42:33.642 "type": "rebuild", 00:42:33.642 "target": "spare", 00:42:33.642 "progress": { 00:42:33.642 "blocks": 59392, 00:42:33.642 "percent": 90 00:42:33.642 } 00:42:33.642 }, 00:42:33.642 "base_bdevs_list": [ 00:42:33.642 { 00:42:33.642 "name": "spare", 00:42:33.642 "uuid": "90adb64d-3d75-5316-87d1-5a304bb17e68", 00:42:33.642 "is_configured": true, 00:42:33.642 "data_offset": 0, 00:42:33.642 "data_size": 65536 00:42:33.642 }, 00:42:33.642 { 00:42:33.642 "name": null, 00:42:33.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:33.642 "is_configured": false, 00:42:33.642 "data_offset": 0, 00:42:33.642 "data_size": 65536 00:42:33.642 }, 00:42:33.642 { 00:42:33.642 "name": "BaseBdev3", 00:42:33.642 "uuid": "ed690dd5-d957-55f0-9e63-aa320fc08fec", 00:42:33.642 "is_configured": true, 00:42:33.642 "data_offset": 0, 00:42:33.642 "data_size": 65536 00:42:33.642 }, 00:42:33.642 { 00:42:33.642 "name": "BaseBdev4", 00:42:33.642 "uuid": "cb007256-394c-510c-93b3-4232a1762664", 00:42:33.642 "is_configured": true, 00:42:33.642 "data_offset": 0, 00:42:33.642 "data_size": 65536 00:42:33.642 } 00:42:33.642 ] 00:42:33.642 }' 00:42:33.642 05:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:33.642 05:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:33.642 05:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target 
// "none"' 00:42:33.642 05:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:33.642 05:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:42:33.899 [2024-12-09 05:33:20.749831] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:42:33.899 [2024-12-09 05:33:20.856381] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:42:33.899 [2024-12-09 05:33:20.859105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:34.721 95.57 IOPS, 286.71 MiB/s [2024-12-09T05:33:21.693Z] 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:34.721 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:34.721 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:34.721 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:34.721 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:34.721 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:34.721 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:34.721 05:33:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.721 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:34.721 05:33:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:34.721 05:33:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.721 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:34.721 "name": 
"raid_bdev1", 00:42:34.721 "uuid": "8bfc8533-bb28-40a1-bb9c-c35aeab86b8f", 00:42:34.721 "strip_size_kb": 0, 00:42:34.721 "state": "online", 00:42:34.721 "raid_level": "raid1", 00:42:34.721 "superblock": false, 00:42:34.721 "num_base_bdevs": 4, 00:42:34.721 "num_base_bdevs_discovered": 3, 00:42:34.721 "num_base_bdevs_operational": 3, 00:42:34.721 "base_bdevs_list": [ 00:42:34.721 { 00:42:34.721 "name": "spare", 00:42:34.721 "uuid": "90adb64d-3d75-5316-87d1-5a304bb17e68", 00:42:34.721 "is_configured": true, 00:42:34.721 "data_offset": 0, 00:42:34.721 "data_size": 65536 00:42:34.721 }, 00:42:34.721 { 00:42:34.721 "name": null, 00:42:34.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:34.721 "is_configured": false, 00:42:34.721 "data_offset": 0, 00:42:34.721 "data_size": 65536 00:42:34.721 }, 00:42:34.721 { 00:42:34.721 "name": "BaseBdev3", 00:42:34.721 "uuid": "ed690dd5-d957-55f0-9e63-aa320fc08fec", 00:42:34.721 "is_configured": true, 00:42:34.721 "data_offset": 0, 00:42:34.721 "data_size": 65536 00:42:34.721 }, 00:42:34.721 { 00:42:34.721 "name": "BaseBdev4", 00:42:34.721 "uuid": "cb007256-394c-510c-93b3-4232a1762664", 00:42:34.721 "is_configured": true, 00:42:34.721 "data_offset": 0, 00:42:34.721 "data_size": 65536 00:42:34.721 } 00:42:34.721 ] 00:42:34.721 }' 00:42:34.722 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:34.722 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:42:34.722 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:34.979 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:42:34.979 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:42:34.979 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:34.979 05:33:21 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:34.979 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:42:34.979 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:42:34.980 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:34.980 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:34.980 05:33:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.980 05:33:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:34.980 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:34.980 05:33:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.980 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:34.980 "name": "raid_bdev1", 00:42:34.980 "uuid": "8bfc8533-bb28-40a1-bb9c-c35aeab86b8f", 00:42:34.980 "strip_size_kb": 0, 00:42:34.980 "state": "online", 00:42:34.980 "raid_level": "raid1", 00:42:34.980 "superblock": false, 00:42:34.980 "num_base_bdevs": 4, 00:42:34.980 "num_base_bdevs_discovered": 3, 00:42:34.980 "num_base_bdevs_operational": 3, 00:42:34.980 "base_bdevs_list": [ 00:42:34.980 { 00:42:34.980 "name": "spare", 00:42:34.980 "uuid": "90adb64d-3d75-5316-87d1-5a304bb17e68", 00:42:34.980 "is_configured": true, 00:42:34.980 "data_offset": 0, 00:42:34.980 "data_size": 65536 00:42:34.980 }, 00:42:34.980 { 00:42:34.980 "name": null, 00:42:34.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:34.980 "is_configured": false, 00:42:34.980 "data_offset": 0, 00:42:34.980 "data_size": 65536 00:42:34.980 }, 00:42:34.980 { 00:42:34.980 "name": "BaseBdev3", 00:42:34.980 "uuid": "ed690dd5-d957-55f0-9e63-aa320fc08fec", 
00:42:34.980 "is_configured": true, 00:42:34.980 "data_offset": 0, 00:42:34.980 "data_size": 65536 00:42:34.980 }, 00:42:34.980 { 00:42:34.980 "name": "BaseBdev4", 00:42:34.980 "uuid": "cb007256-394c-510c-93b3-4232a1762664", 00:42:34.980 "is_configured": true, 00:42:34.980 "data_offset": 0, 00:42:34.980 "data_size": 65536 00:42:34.980 } 00:42:34.980 ] 00:42:34.980 }' 00:42:34.980 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:34.980 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:42:34.980 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:34.980 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:34.980 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:42:34.980 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:34.980 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:34.980 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:34.980 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:34.980 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:42:34.980 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:34.980 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:34.980 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:34.980 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:34.980 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:42:34.980 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:34.980 05:33:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.980 05:33:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:34.980 05:33:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.980 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:34.980 "name": "raid_bdev1", 00:42:34.980 "uuid": "8bfc8533-bb28-40a1-bb9c-c35aeab86b8f", 00:42:34.980 "strip_size_kb": 0, 00:42:34.980 "state": "online", 00:42:34.980 "raid_level": "raid1", 00:42:34.980 "superblock": false, 00:42:34.980 "num_base_bdevs": 4, 00:42:34.980 "num_base_bdevs_discovered": 3, 00:42:34.980 "num_base_bdevs_operational": 3, 00:42:34.980 "base_bdevs_list": [ 00:42:34.980 { 00:42:34.980 "name": "spare", 00:42:34.980 "uuid": "90adb64d-3d75-5316-87d1-5a304bb17e68", 00:42:34.980 "is_configured": true, 00:42:34.980 "data_offset": 0, 00:42:34.980 "data_size": 65536 00:42:34.980 }, 00:42:34.980 { 00:42:34.980 "name": null, 00:42:34.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:34.980 "is_configured": false, 00:42:34.980 "data_offset": 0, 00:42:34.980 "data_size": 65536 00:42:34.980 }, 00:42:34.980 { 00:42:34.980 "name": "BaseBdev3", 00:42:34.980 "uuid": "ed690dd5-d957-55f0-9e63-aa320fc08fec", 00:42:34.980 "is_configured": true, 00:42:34.980 "data_offset": 0, 00:42:34.980 "data_size": 65536 00:42:34.980 }, 00:42:34.980 { 00:42:34.980 "name": "BaseBdev4", 00:42:34.980 "uuid": "cb007256-394c-510c-93b3-4232a1762664", 00:42:34.980 "is_configured": true, 00:42:34.980 "data_offset": 0, 00:42:34.980 "data_size": 65536 00:42:34.980 } 00:42:34.980 ] 00:42:34.980 }' 00:42:34.980 05:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:34.980 05:33:21 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:35.497 89.12 IOPS, 267.38 MiB/s [2024-12-09T05:33:22.469Z] 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:42:35.497 05:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.497 05:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:35.497 [2024-12-09 05:33:22.382269] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:35.497 [2024-12-09 05:33:22.382316] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:42:35.497 00:42:35.497 Latency(us) 00:42:35.497 [2024-12-09T05:33:22.469Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:35.497 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:42:35.497 raid_bdev1 : 8.32 86.93 260.80 0.00 0.00 16612.15 273.69 122016.12 00:42:35.497 [2024-12-09T05:33:22.469Z] =================================================================================================================== 00:42:35.497 [2024-12-09T05:33:22.469Z] Total : 86.93 260.80 0.00 0.00 16612.15 273.69 122016.12 00:42:35.497 [2024-12-09 05:33:22.430862] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:35.497 [2024-12-09 05:33:22.430973] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:35.497 { 00:42:35.497 "results": [ 00:42:35.497 { 00:42:35.497 "job": "raid_bdev1", 00:42:35.497 "core_mask": "0x1", 00:42:35.497 "workload": "randrw", 00:42:35.497 "percentage": 50, 00:42:35.497 "status": "finished", 00:42:35.497 "queue_depth": 2, 00:42:35.497 "io_size": 3145728, 00:42:35.497 "runtime": 8.316624, 00:42:35.497 "iops": 86.9343137311486, 00:42:35.497 "mibps": 260.8029411934458, 00:42:35.497 "io_failed": 0, 00:42:35.497 "io_timeout": 0, 00:42:35.497 
"avg_latency_us": 16612.148693574753, 00:42:35.497 "min_latency_us": 273.6872727272727, 00:42:35.497 "max_latency_us": 122016.11636363636 00:42:35.497 } 00:42:35.497 ], 00:42:35.497 "core_count": 1 00:42:35.497 } 00:42:35.497 [2024-12-09 05:33:22.431109] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:35.497 [2024-12-09 05:33:22.431137] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:42:35.497 05:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.497 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:42:35.497 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:35.497 05:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.497 05:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:35.497 05:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.756 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:42:35.756 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:42:35.756 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:42:35.756 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:42:35.756 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:42:35.756 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:42:35.756 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:42:35.756 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0') 00:42:35.756 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:35.756 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:42:35.756 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:35.756 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:35.756 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:42:35.756 /dev/nbd0 00:42:36.015 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:42:36.015 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:42:36.015 05:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:42:36.015 05:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:42:36.015 05:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:42:36.015 05:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:42:36.016 05:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:42:36.016 05:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:42:36.016 05:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:42:36.016 05:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:42:36.016 05:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:36.016 1+0 records in 00:42:36.016 1+0 records out 00:42:36.016 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386528 s, 10.6 MB/s 00:42:36.016 05:33:22 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:36.016 05:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:42:36.016 05:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:36.016 05:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:42:36.016 05:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:42:36.016 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:36.016 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:36.016 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:42:36.016 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:42:36.016 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:42:36.016 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:42:36.016 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:42:36.016 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:42:36.016 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:42:36.016 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:42:36.016 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:42:36.016 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:42:36.016 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:36.016 05:33:22 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:42:36.016 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:36.016 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:36.016 05:33:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:42:36.275 /dev/nbd1 00:42:36.275 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:42:36.275 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:42:36.275 05:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:42:36.275 05:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:42:36.275 05:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:42:36.275 05:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:42:36.275 05:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:42:36.275 05:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:42:36.275 05:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:42:36.275 05:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:42:36.275 05:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:36.275 1+0 records in 00:42:36.275 1+0 records out 00:42:36.275 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039988 s, 10.2 MB/s 00:42:36.275 05:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:36.275 
05:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:42:36.275 05:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:36.275 05:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:42:36.275 05:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:42:36.275 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:36.275 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:36.275 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:42:36.534 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:42:36.534 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:42:36.534 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:42:36.534 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:36.534 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:42:36.534 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:36.534 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:42:36.793 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:42:36.793 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:42:36.793 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:42:36.793 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:36.793 05:33:23 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:36.793 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:42:36.793 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:42:36.793 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:42:36.793 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:42:36.793 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:42:36.793 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:42:36.793 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:42:36.794 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:42:36.794 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:42:36.794 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:42:36.794 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:36.794 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:42:36.794 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:36.794 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:36.794 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:42:37.053 /dev/nbd1 00:42:37.053 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:42:37.053 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:42:37.053 05:33:23 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:42:37.053 05:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:42:37.053 05:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:42:37.053 05:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:42:37.053 05:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:42:37.053 05:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:42:37.053 05:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:42:37.053 05:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:42:37.053 05:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:37.053 1+0 records in 00:42:37.053 1+0 records out 00:42:37.053 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385857 s, 10.6 MB/s 00:42:37.053 05:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:37.053 05:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:42:37.053 05:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:37.053 05:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:42:37.053 05:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:42:37.053 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:37.053 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:37.053 05:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 
/dev/nbd0 /dev/nbd1 00:42:37.053 05:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:42:37.053 05:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:42:37.053 05:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:42:37.053 05:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:37.053 05:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:42:37.053 05:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:37.053 05:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:42:37.622 05:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:42:37.622 05:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:42:37.622 05:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:42:37.622 05:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:37.622 05:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:37.622 05:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:42:37.622 05:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:42:37.622 05:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:42:37.622 05:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:42:37.622 05:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:42:37.622 05:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 
00:42:37.622 05:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:37.622 05:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:42:37.622 05:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:37.622 05:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:42:37.881 05:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:37.881 05:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:37.881 05:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:37.881 05:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:37.881 05:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:37.881 05:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:37.881 05:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:42:37.881 05:33:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:42:37.881 05:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:42:37.881 05:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79184 00:42:37.881 05:33:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 79184 ']' 00:42:37.881 05:33:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 79184 00:42:37.881 05:33:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:42:37.881 05:33:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:37.881 05:33:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 79184 00:42:37.881 05:33:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:37.881 killing process with pid 79184 00:42:37.881 05:33:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:37.881 05:33:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79184' 00:42:37.881 Received shutdown signal, test time was about 10.564940 seconds 00:42:37.881 00:42:37.881 Latency(us) 00:42:37.881 [2024-12-09T05:33:24.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:37.881 [2024-12-09T05:33:24.853Z] =================================================================================================================== 00:42:37.881 [2024-12-09T05:33:24.853Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:37.881 05:33:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 79184 00:42:37.881 [2024-12-09 05:33:24.662460] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:42:37.881 05:33:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 79184 00:42:38.140 [2024-12-09 05:33:25.015386] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:42:39.543 05:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:42:39.543 00:42:39.543 real 0m14.185s 00:42:39.543 user 0m18.552s 00:42:39.543 sys 0m1.917s 00:42:39.543 05:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:39.543 ************************************ 00:42:39.543 END TEST raid_rebuild_test_io 00:42:39.543 ************************************ 00:42:39.543 05:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:42:39.543 05:33:26 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:42:39.543 
05:33:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:42:39.543 05:33:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:39.543 05:33:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:42:39.543 ************************************ 00:42:39.543 START TEST raid_rebuild_test_sb_io 00:42:39.543 ************************************ 00:42:39.543 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:42:39.543 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:42:39.543 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:42:39.543 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:42:39.543 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:42:39.543 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:42:39.543 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:42:39.543 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:42:39.543 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:42:39.543 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:42:39.543 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:42:39.543 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:42:39.543 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:42:39.543 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:42:39.543 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 
00:42:39.543 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:42:39.543 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:42:39.543 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:42:39.543 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:42:39.543 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:42:39.544 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:42:39.544 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:42:39.544 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:42:39.544 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:42:39.544 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:42:39.544 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:42:39.544 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:42:39.544 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:42:39.544 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:42:39.544 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:42:39.544 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:42:39.544 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79601 00:42:39.544 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79601 00:42:39.544 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@835 -- # '[' -z 79601 ']' 00:42:39.544 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:42:39.544 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:39.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:39.544 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:39.544 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:39.544 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:39.544 05:33:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:39.544 [2024-12-09 05:33:26.239303] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:42:39.544 I/O size of 3145728 is greater than zero copy threshold (65536). 00:42:39.544 Zero copy mechanism will not be used. 
00:42:39.544 [2024-12-09 05:33:26.239476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79601 ] 00:42:39.544 [2024-12-09 05:33:26.417016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:39.802 [2024-12-09 05:33:26.533210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:39.802 [2024-12-09 05:33:26.713096] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:42:39.802 [2024-12-09 05:33:26.713164] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:42:40.369 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:40.369 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:42:40.369 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:42:40.369 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:42:40.369 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:40.369 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:40.369 BaseBdev1_malloc 00:42:40.369 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:40.369 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:42:40.369 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:40.369 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:40.369 [2024-12-09 05:33:27.309356] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:42:40.369 [2024-12-09 05:33:27.309460] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:40.369 [2024-12-09 05:33:27.309495] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:42:40.369 [2024-12-09 05:33:27.309515] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:40.369 [2024-12-09 05:33:27.312216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:40.369 [2024-12-09 05:33:27.312278] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:42:40.369 BaseBdev1 00:42:40.369 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:40.369 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:42:40.369 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:42:40.369 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:40.369 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:40.628 BaseBdev2_malloc 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:40.628 [2024-12-09 05:33:27.361640] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:42:40.628 [2024-12-09 05:33:27.361733] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:42:40.628 [2024-12-09 05:33:27.361783] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:42:40.628 [2024-12-09 05:33:27.361809] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:40.628 [2024-12-09 05:33:27.364747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:40.628 [2024-12-09 05:33:27.364814] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:42:40.628 BaseBdev2 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:40.628 BaseBdev3_malloc 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:40.628 [2024-12-09 05:33:27.424433] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:42:40.628 [2024-12-09 05:33:27.424546] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:40.628 [2024-12-09 05:33:27.424580] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:42:40.628 
[2024-12-09 05:33:27.424601] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:40.628 [2024-12-09 05:33:27.427429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:40.628 [2024-12-09 05:33:27.427481] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:42:40.628 BaseBdev3 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:40.628 BaseBdev4_malloc 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:40.628 [2024-12-09 05:33:27.480205] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:42:40.628 [2024-12-09 05:33:27.480303] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:40.628 [2024-12-09 05:33:27.480336] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:42:40.628 [2024-12-09 05:33:27.480356] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:40.628 [2024-12-09 05:33:27.483013] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:40.628 [2024-12-09 05:33:27.483336] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:42:40.628 BaseBdev4 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:40.628 spare_malloc 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:40.628 spare_delay 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:40.628 [2024-12-09 05:33:27.541297] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:42:40.628 [2024-12-09 05:33:27.541434] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:40.628 [2024-12-09 05:33:27.541491] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:42:40.628 [2024-12-09 05:33:27.541521] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:40.628 [2024-12-09 05:33:27.544577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:40.628 [2024-12-09 05:33:27.544804] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:42:40.628 spare 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:40.628 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:40.628 [2024-12-09 05:33:27.549529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:42:40.628 [2024-12-09 05:33:27.552015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:42:40.628 [2024-12-09 05:33:27.552110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:42:40.628 [2024-12-09 05:33:27.552202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:42:40.629 [2024-12-09 05:33:27.552457] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:42:40.629 [2024-12-09 05:33:27.552482] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:42:40.629 [2024-12-09 05:33:27.552816] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:42:40.629 [2024-12-09 05:33:27.553062] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:42:40.629 [2024-12-09 05:33:27.553081] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:42:40.629 [2024-12-09 05:33:27.553313] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:40.629 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:40.629 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:42:40.629 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:40.629 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:40.629 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:40.629 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:40.629 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:42:40.629 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:40.629 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:40.629 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:40.629 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:40.629 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:40.629 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:40.629 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:40.629 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:40.629 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:40.887 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:40.887 "name": "raid_bdev1", 00:42:40.887 "uuid": "98f4aad2-8368-40e9-985c-e3bb6cd63c86", 00:42:40.887 "strip_size_kb": 0, 00:42:40.887 "state": "online", 00:42:40.887 "raid_level": "raid1", 00:42:40.887 "superblock": true, 00:42:40.887 "num_base_bdevs": 4, 00:42:40.887 "num_base_bdevs_discovered": 4, 00:42:40.887 "num_base_bdevs_operational": 4, 00:42:40.887 "base_bdevs_list": [ 00:42:40.887 { 00:42:40.887 "name": "BaseBdev1", 00:42:40.887 "uuid": "e7c90c0b-f8da-593b-84e4-b8ea41219a0a", 00:42:40.887 "is_configured": true, 00:42:40.887 "data_offset": 2048, 00:42:40.887 "data_size": 63488 00:42:40.887 }, 00:42:40.887 { 00:42:40.887 "name": "BaseBdev2", 00:42:40.887 "uuid": "eb8d17c4-fce6-52bb-98e8-56040bf8e3c4", 00:42:40.887 "is_configured": true, 00:42:40.887 "data_offset": 2048, 00:42:40.887 "data_size": 63488 00:42:40.887 }, 00:42:40.887 { 00:42:40.887 "name": "BaseBdev3", 00:42:40.887 "uuid": "6719fdac-bb6b-5be6-b1b4-69b9769ce534", 00:42:40.887 "is_configured": true, 00:42:40.887 "data_offset": 2048, 00:42:40.887 "data_size": 63488 00:42:40.887 }, 00:42:40.887 { 00:42:40.887 "name": "BaseBdev4", 00:42:40.887 "uuid": "19659a15-3e79-5326-84bb-0984d25e2aa7", 00:42:40.887 "is_configured": true, 00:42:40.887 "data_offset": 2048, 00:42:40.887 "data_size": 63488 00:42:40.887 } 00:42:40.887 ] 00:42:40.887 }' 00:42:40.887 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:40.887 05:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:41.146 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:42:41.146 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:42:41.146 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:42:41.146 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:41.146 [2024-12-09 05:33:28.102335] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:41.404 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:41.404 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:42:41.404 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:41.404 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:42:41.404 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:41.404 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:41.404 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:41.404 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:42:41.404 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:42:41.404 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:42:41.404 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:42:41.404 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:41.404 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:41.404 [2024-12-09 05:33:28.213731] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:42:41.404 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:41.404 05:33:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:42:41.404 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:41.404 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:41.404 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:41.404 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:41.404 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:42:41.404 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:41.404 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:41.404 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:41.404 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:41.404 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:41.404 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:41.404 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:41.404 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:41.404 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:41.404 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:41.404 "name": "raid_bdev1", 00:42:41.404 "uuid": "98f4aad2-8368-40e9-985c-e3bb6cd63c86", 00:42:41.404 "strip_size_kb": 0, 00:42:41.404 "state": "online", 00:42:41.404 "raid_level": "raid1", 00:42:41.404 
"superblock": true, 00:42:41.404 "num_base_bdevs": 4, 00:42:41.404 "num_base_bdevs_discovered": 3, 00:42:41.404 "num_base_bdevs_operational": 3, 00:42:41.404 "base_bdevs_list": [ 00:42:41.404 { 00:42:41.404 "name": null, 00:42:41.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:41.404 "is_configured": false, 00:42:41.404 "data_offset": 0, 00:42:41.404 "data_size": 63488 00:42:41.404 }, 00:42:41.404 { 00:42:41.404 "name": "BaseBdev2", 00:42:41.404 "uuid": "eb8d17c4-fce6-52bb-98e8-56040bf8e3c4", 00:42:41.404 "is_configured": true, 00:42:41.404 "data_offset": 2048, 00:42:41.404 "data_size": 63488 00:42:41.404 }, 00:42:41.404 { 00:42:41.404 "name": "BaseBdev3", 00:42:41.404 "uuid": "6719fdac-bb6b-5be6-b1b4-69b9769ce534", 00:42:41.404 "is_configured": true, 00:42:41.404 "data_offset": 2048, 00:42:41.404 "data_size": 63488 00:42:41.404 }, 00:42:41.404 { 00:42:41.404 "name": "BaseBdev4", 00:42:41.404 "uuid": "19659a15-3e79-5326-84bb-0984d25e2aa7", 00:42:41.404 "is_configured": true, 00:42:41.404 "data_offset": 2048, 00:42:41.404 "data_size": 63488 00:42:41.404 } 00:42:41.404 ] 00:42:41.404 }' 00:42:41.404 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:41.404 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:41.404 [2024-12-09 05:33:28.353147] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:42:41.404 I/O size of 3145728 is greater than zero copy threshold (65536). 00:42:41.404 Zero copy mechanism will not be used. 00:42:41.404 Running I/O for 60 seconds... 
00:42:41.970 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:42:41.970 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:41.970 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:41.970 [2024-12-09 05:33:28.780919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:41.970 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:41.970 05:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:42:41.970 [2024-12-09 05:33:28.853263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:42:41.970 [2024-12-09 05:33:28.856655] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:42.228 [2024-12-09 05:33:28.974008] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:42:42.228 [2024-12-09 05:33:28.975904] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:42:42.228 [2024-12-09 05:33:29.181165] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:42:42.229 [2024-12-09 05:33:29.181890] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:42:42.486 138.00 IOPS, 414.00 MiB/s [2024-12-09T05:33:29.459Z] [2024-12-09 05:33:29.439104] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:42:42.487 [2024-12-09 05:33:29.441050] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:42:42.744 [2024-12-09 05:33:29.661062] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:42:43.003 05:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:43.003 05:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:43.003 05:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:43.003 05:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:43.003 05:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:43.003 05:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:43.003 05:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:43.003 05:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:43.003 05:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:43.003 05:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:43.003 05:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:43.003 "name": "raid_bdev1", 00:42:43.003 "uuid": "98f4aad2-8368-40e9-985c-e3bb6cd63c86", 00:42:43.003 "strip_size_kb": 0, 00:42:43.003 "state": "online", 00:42:43.003 "raid_level": "raid1", 00:42:43.003 "superblock": true, 00:42:43.003 "num_base_bdevs": 4, 00:42:43.003 "num_base_bdevs_discovered": 4, 00:42:43.003 "num_base_bdevs_operational": 4, 00:42:43.003 "process": { 00:42:43.003 "type": "rebuild", 00:42:43.003 "target": "spare", 00:42:43.003 "progress": { 00:42:43.003 "blocks": 10240, 00:42:43.003 "percent": 16 00:42:43.003 } 00:42:43.003 }, 00:42:43.003 "base_bdevs_list": [ 00:42:43.003 { 00:42:43.003 "name": "spare", 
00:42:43.003 "uuid": "02b0e00f-ca00-5f89-87d8-2d8a76251c21", 00:42:43.003 "is_configured": true, 00:42:43.003 "data_offset": 2048, 00:42:43.003 "data_size": 63488 00:42:43.003 }, 00:42:43.003 { 00:42:43.003 "name": "BaseBdev2", 00:42:43.003 "uuid": "eb8d17c4-fce6-52bb-98e8-56040bf8e3c4", 00:42:43.003 "is_configured": true, 00:42:43.003 "data_offset": 2048, 00:42:43.003 "data_size": 63488 00:42:43.003 }, 00:42:43.003 { 00:42:43.003 "name": "BaseBdev3", 00:42:43.003 "uuid": "6719fdac-bb6b-5be6-b1b4-69b9769ce534", 00:42:43.003 "is_configured": true, 00:42:43.003 "data_offset": 2048, 00:42:43.003 "data_size": 63488 00:42:43.003 }, 00:42:43.003 { 00:42:43.003 "name": "BaseBdev4", 00:42:43.003 "uuid": "19659a15-3e79-5326-84bb-0984d25e2aa7", 00:42:43.003 "is_configured": true, 00:42:43.003 "data_offset": 2048, 00:42:43.003 "data_size": 63488 00:42:43.003 } 00:42:43.003 ] 00:42:43.003 }' 00:42:43.003 05:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:43.003 05:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:43.003 05:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:43.261 05:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:43.261 05:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:42:43.261 05:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:43.261 05:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:43.261 [2024-12-09 05:33:30.005741] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:43.261 [2024-12-09 05:33:30.005905] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:42:43.261 [2024-12-09 
05:33:30.007510] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:42:43.261 [2024-12-09 05:33:30.117752] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:42:43.261 [2024-12-09 05:33:30.130159] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:43.261 [2024-12-09 05:33:30.130417] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:43.261 [2024-12-09 05:33:30.130485] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:42:43.261 [2024-12-09 05:33:30.168078] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:42:43.261 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:43.261 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:42:43.261 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:43.261 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:43.261 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:43.261 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:43.261 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:42:43.261 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:43.261 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:43.261 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:43.261 05:33:30 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:42:43.261 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:43.261 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:43.261 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:43.261 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:43.262 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:43.520 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:43.520 "name": "raid_bdev1", 00:42:43.520 "uuid": "98f4aad2-8368-40e9-985c-e3bb6cd63c86", 00:42:43.520 "strip_size_kb": 0, 00:42:43.520 "state": "online", 00:42:43.520 "raid_level": "raid1", 00:42:43.520 "superblock": true, 00:42:43.520 "num_base_bdevs": 4, 00:42:43.520 "num_base_bdevs_discovered": 3, 00:42:43.520 "num_base_bdevs_operational": 3, 00:42:43.520 "base_bdevs_list": [ 00:42:43.520 { 00:42:43.520 "name": null, 00:42:43.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:43.520 "is_configured": false, 00:42:43.520 "data_offset": 0, 00:42:43.520 "data_size": 63488 00:42:43.520 }, 00:42:43.520 { 00:42:43.520 "name": "BaseBdev2", 00:42:43.520 "uuid": "eb8d17c4-fce6-52bb-98e8-56040bf8e3c4", 00:42:43.520 "is_configured": true, 00:42:43.520 "data_offset": 2048, 00:42:43.520 "data_size": 63488 00:42:43.520 }, 00:42:43.520 { 00:42:43.520 "name": "BaseBdev3", 00:42:43.520 "uuid": "6719fdac-bb6b-5be6-b1b4-69b9769ce534", 00:42:43.520 "is_configured": true, 00:42:43.520 "data_offset": 2048, 00:42:43.520 "data_size": 63488 00:42:43.520 }, 00:42:43.520 { 00:42:43.520 "name": "BaseBdev4", 00:42:43.520 "uuid": "19659a15-3e79-5326-84bb-0984d25e2aa7", 00:42:43.520 "is_configured": true, 00:42:43.520 "data_offset": 2048, 00:42:43.520 "data_size": 63488 00:42:43.520 } 
00:42:43.520 ] 00:42:43.520 }' 00:42:43.520 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:43.520 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:43.777 113.50 IOPS, 340.50 MiB/s [2024-12-09T05:33:30.749Z] 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:43.777 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:43.777 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:42:43.777 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:42:43.777 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:43.777 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:43.777 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:43.777 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:43.777 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:44.035 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.035 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:44.035 "name": "raid_bdev1", 00:42:44.035 "uuid": "98f4aad2-8368-40e9-985c-e3bb6cd63c86", 00:42:44.035 "strip_size_kb": 0, 00:42:44.035 "state": "online", 00:42:44.035 "raid_level": "raid1", 00:42:44.035 "superblock": true, 00:42:44.035 "num_base_bdevs": 4, 00:42:44.035 "num_base_bdevs_discovered": 3, 00:42:44.035 "num_base_bdevs_operational": 3, 00:42:44.035 "base_bdevs_list": [ 00:42:44.035 { 00:42:44.035 "name": null, 00:42:44.035 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:42:44.035 "is_configured": false, 00:42:44.035 "data_offset": 0, 00:42:44.035 "data_size": 63488 00:42:44.035 }, 00:42:44.035 { 00:42:44.035 "name": "BaseBdev2", 00:42:44.035 "uuid": "eb8d17c4-fce6-52bb-98e8-56040bf8e3c4", 00:42:44.035 "is_configured": true, 00:42:44.035 "data_offset": 2048, 00:42:44.035 "data_size": 63488 00:42:44.035 }, 00:42:44.035 { 00:42:44.035 "name": "BaseBdev3", 00:42:44.035 "uuid": "6719fdac-bb6b-5be6-b1b4-69b9769ce534", 00:42:44.035 "is_configured": true, 00:42:44.035 "data_offset": 2048, 00:42:44.035 "data_size": 63488 00:42:44.035 }, 00:42:44.035 { 00:42:44.035 "name": "BaseBdev4", 00:42:44.035 "uuid": "19659a15-3e79-5326-84bb-0984d25e2aa7", 00:42:44.035 "is_configured": true, 00:42:44.035 "data_offset": 2048, 00:42:44.035 "data_size": 63488 00:42:44.035 } 00:42:44.035 ] 00:42:44.035 }' 00:42:44.035 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:44.035 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:42:44.035 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:44.035 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:44.035 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:42:44.035 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.035 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:44.035 [2024-12-09 05:33:30.892126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:44.035 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.035 05:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 
00:42:44.035 [2024-12-09 05:33:30.964625] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:42:44.036 [2024-12-09 05:33:30.967158] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:44.294 [2024-12-09 05:33:31.083588] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:42:44.294 [2024-12-09 05:33:31.085257] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:42:44.552 [2024-12-09 05:33:31.297539] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:42:44.552 [2024-12-09 05:33:31.298402] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:42:44.810 127.67 IOPS, 383.00 MiB/s [2024-12-09T05:33:31.782Z] [2024-12-09 05:33:31.629619] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:42:44.810 [2024-12-09 05:33:31.631325] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:42:45.068 [2024-12-09 05:33:31.867407] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:42:45.068 [2024-12-09 05:33:31.867820] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:42:45.068 05:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:45.068 05:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:45.068 05:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:45.068 05:33:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:45.068 05:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:45.068 05:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:45.068 05:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:45.068 05:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.068 05:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:45.068 05:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.068 05:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:45.068 "name": "raid_bdev1", 00:42:45.068 "uuid": "98f4aad2-8368-40e9-985c-e3bb6cd63c86", 00:42:45.068 "strip_size_kb": 0, 00:42:45.068 "state": "online", 00:42:45.068 "raid_level": "raid1", 00:42:45.068 "superblock": true, 00:42:45.068 "num_base_bdevs": 4, 00:42:45.068 "num_base_bdevs_discovered": 4, 00:42:45.068 "num_base_bdevs_operational": 4, 00:42:45.068 "process": { 00:42:45.068 "type": "rebuild", 00:42:45.068 "target": "spare", 00:42:45.068 "progress": { 00:42:45.068 "blocks": 10240, 00:42:45.068 "percent": 16 00:42:45.068 } 00:42:45.068 }, 00:42:45.068 "base_bdevs_list": [ 00:42:45.068 { 00:42:45.068 "name": "spare", 00:42:45.068 "uuid": "02b0e00f-ca00-5f89-87d8-2d8a76251c21", 00:42:45.068 "is_configured": true, 00:42:45.068 "data_offset": 2048, 00:42:45.068 "data_size": 63488 00:42:45.068 }, 00:42:45.068 { 00:42:45.068 "name": "BaseBdev2", 00:42:45.068 "uuid": "eb8d17c4-fce6-52bb-98e8-56040bf8e3c4", 00:42:45.068 "is_configured": true, 00:42:45.068 "data_offset": 2048, 00:42:45.068 "data_size": 63488 00:42:45.068 }, 00:42:45.068 { 00:42:45.068 "name": "BaseBdev3", 00:42:45.068 "uuid": 
"6719fdac-bb6b-5be6-b1b4-69b9769ce534", 00:42:45.068 "is_configured": true, 00:42:45.068 "data_offset": 2048, 00:42:45.068 "data_size": 63488 00:42:45.068 }, 00:42:45.068 { 00:42:45.068 "name": "BaseBdev4", 00:42:45.068 "uuid": "19659a15-3e79-5326-84bb-0984d25e2aa7", 00:42:45.068 "is_configured": true, 00:42:45.068 "data_offset": 2048, 00:42:45.068 "data_size": 63488 00:42:45.068 } 00:42:45.068 ] 00:42:45.068 }' 00:42:45.068 05:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:45.068 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:45.326 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:45.326 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:45.326 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:42:45.326 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:42:45.326 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:42:45.326 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:42:45.326 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:42:45.326 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:42:45.326 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:42:45.326 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.326 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:45.326 [2024-12-09 05:33:32.096023] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:42:45.326 
[2024-12-09 05:33:32.191926] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:42:45.583 114.75 IOPS, 344.25 MiB/s [2024-12-09T05:33:32.555Z] [2024-12-09 05:33:32.402216] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:42:45.583 [2024-12-09 05:33:32.402308] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:42:45.583 [2024-12-09 05:33:32.404125] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:42:45.583 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.583 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:42:45.583 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:42:45.583 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:45.583 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:45.583 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:45.583 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:45.583 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:45.583 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:45.583 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.583 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:45.583 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:42:45.583 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.583 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:45.583 "name": "raid_bdev1", 00:42:45.583 "uuid": "98f4aad2-8368-40e9-985c-e3bb6cd63c86", 00:42:45.583 "strip_size_kb": 0, 00:42:45.583 "state": "online", 00:42:45.583 "raid_level": "raid1", 00:42:45.583 "superblock": true, 00:42:45.583 "num_base_bdevs": 4, 00:42:45.583 "num_base_bdevs_discovered": 3, 00:42:45.583 "num_base_bdevs_operational": 3, 00:42:45.583 "process": { 00:42:45.583 "type": "rebuild", 00:42:45.583 "target": "spare", 00:42:45.583 "progress": { 00:42:45.583 "blocks": 14336, 00:42:45.583 "percent": 22 00:42:45.583 } 00:42:45.583 }, 00:42:45.583 "base_bdevs_list": [ 00:42:45.583 { 00:42:45.583 "name": "spare", 00:42:45.583 "uuid": "02b0e00f-ca00-5f89-87d8-2d8a76251c21", 00:42:45.583 "is_configured": true, 00:42:45.583 "data_offset": 2048, 00:42:45.583 "data_size": 63488 00:42:45.583 }, 00:42:45.583 { 00:42:45.583 "name": null, 00:42:45.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:45.583 "is_configured": false, 00:42:45.583 "data_offset": 0, 00:42:45.583 "data_size": 63488 00:42:45.583 }, 00:42:45.583 { 00:42:45.583 "name": "BaseBdev3", 00:42:45.583 "uuid": "6719fdac-bb6b-5be6-b1b4-69b9769ce534", 00:42:45.583 "is_configured": true, 00:42:45.583 "data_offset": 2048, 00:42:45.583 "data_size": 63488 00:42:45.583 }, 00:42:45.583 { 00:42:45.583 "name": "BaseBdev4", 00:42:45.583 "uuid": "19659a15-3e79-5326-84bb-0984d25e2aa7", 00:42:45.583 "is_configured": true, 00:42:45.583 "data_offset": 2048, 00:42:45.583 "data_size": 63488 00:42:45.583 } 00:42:45.583 ] 00:42:45.583 }' 00:42:45.583 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:45.583 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:45.583 
05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:45.839 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:45.839 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=548 00:42:45.839 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:45.839 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:45.839 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:45.839 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:45.839 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:45.839 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:45.839 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:45.839 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:45.839 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.839 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:45.839 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.839 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:45.839 "name": "raid_bdev1", 00:42:45.839 "uuid": "98f4aad2-8368-40e9-985c-e3bb6cd63c86", 00:42:45.839 "strip_size_kb": 0, 00:42:45.839 "state": "online", 00:42:45.839 "raid_level": "raid1", 00:42:45.839 "superblock": true, 00:42:45.839 "num_base_bdevs": 4, 00:42:45.839 "num_base_bdevs_discovered": 3, 
00:42:45.839 "num_base_bdevs_operational": 3, 00:42:45.839 "process": { 00:42:45.839 "type": "rebuild", 00:42:45.839 "target": "spare", 00:42:45.839 "progress": { 00:42:45.839 "blocks": 14336, 00:42:45.839 "percent": 22 00:42:45.839 } 00:42:45.839 }, 00:42:45.839 "base_bdevs_list": [ 00:42:45.839 { 00:42:45.839 "name": "spare", 00:42:45.839 "uuid": "02b0e00f-ca00-5f89-87d8-2d8a76251c21", 00:42:45.839 "is_configured": true, 00:42:45.839 "data_offset": 2048, 00:42:45.839 "data_size": 63488 00:42:45.839 }, 00:42:45.840 { 00:42:45.840 "name": null, 00:42:45.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:45.840 "is_configured": false, 00:42:45.840 "data_offset": 0, 00:42:45.840 "data_size": 63488 00:42:45.840 }, 00:42:45.840 { 00:42:45.840 "name": "BaseBdev3", 00:42:45.840 "uuid": "6719fdac-bb6b-5be6-b1b4-69b9769ce534", 00:42:45.840 "is_configured": true, 00:42:45.840 "data_offset": 2048, 00:42:45.840 "data_size": 63488 00:42:45.840 }, 00:42:45.840 { 00:42:45.840 "name": "BaseBdev4", 00:42:45.840 "uuid": "19659a15-3e79-5326-84bb-0984d25e2aa7", 00:42:45.840 "is_configured": true, 00:42:45.840 "data_offset": 2048, 00:42:45.840 "data_size": 63488 00:42:45.840 } 00:42:45.840 ] 00:42:45.840 }' 00:42:45.840 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:45.840 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:45.840 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:45.840 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:45.840 05:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:42:46.097 [2024-12-09 05:33:32.879999] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:42:46.381 [2024-12-09 05:33:33.091061] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:42:46.381 [2024-12-09 05:33:33.091517] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:42:46.638 103.20 IOPS, 309.60 MiB/s [2024-12-09T05:33:33.610Z] [2024-12-09 05:33:33.424224] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:42:46.638 [2024-12-09 05:33:33.424895] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:42:46.638 [2024-12-09 05:33:33.554820] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:42:46.896 05:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:46.896 05:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:46.896 05:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:46.896 05:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:46.896 05:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:46.896 05:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:46.896 05:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:46.896 05:33:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.896 05:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:46.896 05:33:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:46.896 05:33:33 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.896 05:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:46.896 "name": "raid_bdev1", 00:42:46.896 "uuid": "98f4aad2-8368-40e9-985c-e3bb6cd63c86", 00:42:46.896 "strip_size_kb": 0, 00:42:46.896 "state": "online", 00:42:46.896 "raid_level": "raid1", 00:42:46.896 "superblock": true, 00:42:46.896 "num_base_bdevs": 4, 00:42:46.896 "num_base_bdevs_discovered": 3, 00:42:46.896 "num_base_bdevs_operational": 3, 00:42:46.896 "process": { 00:42:46.896 "type": "rebuild", 00:42:46.896 "target": "spare", 00:42:46.896 "progress": { 00:42:46.896 "blocks": 28672, 00:42:46.896 "percent": 45 00:42:46.896 } 00:42:46.896 }, 00:42:46.896 "base_bdevs_list": [ 00:42:46.896 { 00:42:46.896 "name": "spare", 00:42:46.896 "uuid": "02b0e00f-ca00-5f89-87d8-2d8a76251c21", 00:42:46.896 "is_configured": true, 00:42:46.896 "data_offset": 2048, 00:42:46.896 "data_size": 63488 00:42:46.896 }, 00:42:46.896 { 00:42:46.896 "name": null, 00:42:46.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:46.896 "is_configured": false, 00:42:46.896 "data_offset": 0, 00:42:46.896 "data_size": 63488 00:42:46.896 }, 00:42:46.896 { 00:42:46.896 "name": "BaseBdev3", 00:42:46.896 "uuid": "6719fdac-bb6b-5be6-b1b4-69b9769ce534", 00:42:46.896 "is_configured": true, 00:42:46.896 "data_offset": 2048, 00:42:46.896 "data_size": 63488 00:42:46.896 }, 00:42:46.896 { 00:42:46.896 "name": "BaseBdev4", 00:42:46.896 "uuid": "19659a15-3e79-5326-84bb-0984d25e2aa7", 00:42:46.896 "is_configured": true, 00:42:46.896 "data_offset": 2048, 00:42:46.896 "data_size": 63488 00:42:46.896 } 00:42:46.896 ] 00:42:46.896 }' 00:42:46.896 05:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:46.896 05:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:46.896 05:33:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:47.153 05:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:47.153 05:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:42:47.412 [2024-12-09 05:33:34.256852] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:42:47.670 94.50 IOPS, 283.50 MiB/s [2024-12-09T05:33:34.642Z] [2024-12-09 05:33:34.586168] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:42:48.247 05:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:48.247 05:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:48.247 05:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:48.247 05:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:48.247 05:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:48.247 05:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:48.247 05:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:48.247 05:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:48.247 05:33:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:48.247 05:33:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:48.247 05:33:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:48.247 05:33:34 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:48.247 "name": "raid_bdev1", 00:42:48.247 "uuid": "98f4aad2-8368-40e9-985c-e3bb6cd63c86", 00:42:48.247 "strip_size_kb": 0, 00:42:48.247 "state": "online", 00:42:48.247 "raid_level": "raid1", 00:42:48.247 "superblock": true, 00:42:48.247 "num_base_bdevs": 4, 00:42:48.247 "num_base_bdevs_discovered": 3, 00:42:48.247 "num_base_bdevs_operational": 3, 00:42:48.247 "process": { 00:42:48.247 "type": "rebuild", 00:42:48.247 "target": "spare", 00:42:48.247 "progress": { 00:42:48.247 "blocks": 49152, 00:42:48.247 "percent": 77 00:42:48.247 } 00:42:48.247 }, 00:42:48.247 "base_bdevs_list": [ 00:42:48.247 { 00:42:48.247 "name": "spare", 00:42:48.247 "uuid": "02b0e00f-ca00-5f89-87d8-2d8a76251c21", 00:42:48.247 "is_configured": true, 00:42:48.247 "data_offset": 2048, 00:42:48.247 "data_size": 63488 00:42:48.247 }, 00:42:48.247 { 00:42:48.247 "name": null, 00:42:48.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:48.247 "is_configured": false, 00:42:48.247 "data_offset": 0, 00:42:48.247 "data_size": 63488 00:42:48.247 }, 00:42:48.247 { 00:42:48.247 "name": "BaseBdev3", 00:42:48.247 "uuid": "6719fdac-bb6b-5be6-b1b4-69b9769ce534", 00:42:48.247 "is_configured": true, 00:42:48.247 "data_offset": 2048, 00:42:48.247 "data_size": 63488 00:42:48.247 }, 00:42:48.247 { 00:42:48.247 "name": "BaseBdev4", 00:42:48.247 "uuid": "19659a15-3e79-5326-84bb-0984d25e2aa7", 00:42:48.247 "is_configured": true, 00:42:48.247 "data_offset": 2048, 00:42:48.247 "data_size": 63488 00:42:48.247 } 00:42:48.247 ] 00:42:48.247 }' 00:42:48.247 05:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:48.247 05:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:48.247 05:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:48.248 05:33:35 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:48.248 05:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:42:48.766 86.29 IOPS, 258.86 MiB/s [2024-12-09T05:33:35.738Z] [2024-12-09 05:33:35.682455] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:42:49.024 [2024-12-09 05:33:35.789313] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:42:49.024 [2024-12-09 05:33:35.792840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:49.283 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:49.283 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:49.283 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:49.283 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:49.283 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:49.283 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:49.283 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:49.283 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:49.283 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:49.283 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:49.283 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:49.283 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:49.283 "name": "raid_bdev1", 00:42:49.283 "uuid": 
"98f4aad2-8368-40e9-985c-e3bb6cd63c86", 00:42:49.283 "strip_size_kb": 0, 00:42:49.283 "state": "online", 00:42:49.283 "raid_level": "raid1", 00:42:49.283 "superblock": true, 00:42:49.283 "num_base_bdevs": 4, 00:42:49.283 "num_base_bdevs_discovered": 3, 00:42:49.283 "num_base_bdevs_operational": 3, 00:42:49.283 "base_bdevs_list": [ 00:42:49.283 { 00:42:49.283 "name": "spare", 00:42:49.283 "uuid": "02b0e00f-ca00-5f89-87d8-2d8a76251c21", 00:42:49.283 "is_configured": true, 00:42:49.283 "data_offset": 2048, 00:42:49.283 "data_size": 63488 00:42:49.283 }, 00:42:49.283 { 00:42:49.283 "name": null, 00:42:49.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:49.283 "is_configured": false, 00:42:49.283 "data_offset": 0, 00:42:49.283 "data_size": 63488 00:42:49.283 }, 00:42:49.283 { 00:42:49.283 "name": "BaseBdev3", 00:42:49.283 "uuid": "6719fdac-bb6b-5be6-b1b4-69b9769ce534", 00:42:49.283 "is_configured": true, 00:42:49.283 "data_offset": 2048, 00:42:49.283 "data_size": 63488 00:42:49.283 }, 00:42:49.283 { 00:42:49.283 "name": "BaseBdev4", 00:42:49.283 "uuid": "19659a15-3e79-5326-84bb-0984d25e2aa7", 00:42:49.283 "is_configured": true, 00:42:49.283 "data_offset": 2048, 00:42:49.283 "data_size": 63488 00:42:49.283 } 00:42:49.283 ] 00:42:49.283 }' 00:42:49.283 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:49.283 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:42:49.283 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:49.543 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:42:49.543 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:42:49.543 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:49.543 05:33:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:49.543 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:42:49.543 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:42:49.543 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:49.543 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:49.543 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:49.543 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:49.543 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:49.543 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:49.543 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:49.543 "name": "raid_bdev1", 00:42:49.543 "uuid": "98f4aad2-8368-40e9-985c-e3bb6cd63c86", 00:42:49.543 "strip_size_kb": 0, 00:42:49.543 "state": "online", 00:42:49.543 "raid_level": "raid1", 00:42:49.543 "superblock": true, 00:42:49.543 "num_base_bdevs": 4, 00:42:49.544 "num_base_bdevs_discovered": 3, 00:42:49.544 "num_base_bdevs_operational": 3, 00:42:49.544 "base_bdevs_list": [ 00:42:49.544 { 00:42:49.544 "name": "spare", 00:42:49.544 "uuid": "02b0e00f-ca00-5f89-87d8-2d8a76251c21", 00:42:49.544 "is_configured": true, 00:42:49.544 "data_offset": 2048, 00:42:49.544 "data_size": 63488 00:42:49.544 }, 00:42:49.544 { 00:42:49.544 "name": null, 00:42:49.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:49.544 "is_configured": false, 00:42:49.544 "data_offset": 0, 00:42:49.544 "data_size": 63488 00:42:49.544 }, 00:42:49.544 { 00:42:49.544 "name": "BaseBdev3", 00:42:49.544 "uuid": 
"6719fdac-bb6b-5be6-b1b4-69b9769ce534", 00:42:49.544 "is_configured": true, 00:42:49.544 "data_offset": 2048, 00:42:49.544 "data_size": 63488 00:42:49.544 }, 00:42:49.544 { 00:42:49.544 "name": "BaseBdev4", 00:42:49.544 "uuid": "19659a15-3e79-5326-84bb-0984d25e2aa7", 00:42:49.544 "is_configured": true, 00:42:49.544 "data_offset": 2048, 00:42:49.544 "data_size": 63488 00:42:49.544 } 00:42:49.544 ] 00:42:49.544 }' 00:42:49.544 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:49.544 79.50 IOPS, 238.50 MiB/s [2024-12-09T05:33:36.516Z] 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:42:49.544 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:49.544 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:49.544 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:42:49.544 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:49.544 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:49.544 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:49.544 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:49.544 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:42:49.544 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:49.544 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:49.544 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:49.544 05:33:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:49.544 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:49.544 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:49.544 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:49.544 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:49.544 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:49.544 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:49.544 "name": "raid_bdev1", 00:42:49.544 "uuid": "98f4aad2-8368-40e9-985c-e3bb6cd63c86", 00:42:49.544 "strip_size_kb": 0, 00:42:49.544 "state": "online", 00:42:49.544 "raid_level": "raid1", 00:42:49.544 "superblock": true, 00:42:49.544 "num_base_bdevs": 4, 00:42:49.544 "num_base_bdevs_discovered": 3, 00:42:49.544 "num_base_bdevs_operational": 3, 00:42:49.544 "base_bdevs_list": [ 00:42:49.544 { 00:42:49.544 "name": "spare", 00:42:49.544 "uuid": "02b0e00f-ca00-5f89-87d8-2d8a76251c21", 00:42:49.544 "is_configured": true, 00:42:49.544 "data_offset": 2048, 00:42:49.544 "data_size": 63488 00:42:49.544 }, 00:42:49.544 { 00:42:49.544 "name": null, 00:42:49.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:49.544 "is_configured": false, 00:42:49.544 "data_offset": 0, 00:42:49.544 "data_size": 63488 00:42:49.544 }, 00:42:49.544 { 00:42:49.544 "name": "BaseBdev3", 00:42:49.544 "uuid": "6719fdac-bb6b-5be6-b1b4-69b9769ce534", 00:42:49.544 "is_configured": true, 00:42:49.544 "data_offset": 2048, 00:42:49.544 "data_size": 63488 00:42:49.544 }, 00:42:49.544 { 00:42:49.544 "name": "BaseBdev4", 00:42:49.544 "uuid": "19659a15-3e79-5326-84bb-0984d25e2aa7", 00:42:49.544 "is_configured": true, 00:42:49.544 "data_offset": 2048, 00:42:49.544 
"data_size": 63488 00:42:49.544 } 00:42:49.544 ] 00:42:49.544 }' 00:42:49.544 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:49.544 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:50.113 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:42:50.113 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:50.113 05:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:50.113 [2024-12-09 05:33:36.957066] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:50.113 [2024-12-09 05:33:36.957452] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:42:50.113 00:42:50.113 Latency(us) 00:42:50.113 [2024-12-09T05:33:37.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:50.113 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:42:50.113 raid_bdev1 : 8.65 76.27 228.82 0.00 0.00 18277.75 273.69 116296.61 00:42:50.113 [2024-12-09T05:33:37.085Z] =================================================================================================================== 00:42:50.113 [2024-12-09T05:33:37.085Z] Total : 76.27 228.82 0.00 0.00 18277.75 273.69 116296.61 00:42:50.113 [2024-12-09 05:33:37.029218] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:50.113 [2024-12-09 05:33:37.029318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:50.113 [2024-12-09 05:33:37.029468] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:50.113 [2024-12-09 05:33:37.029490] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:42:50.113 { 00:42:50.113 
"results": [ 00:42:50.113 { 00:42:50.113 "job": "raid_bdev1", 00:42:50.113 "core_mask": "0x1", 00:42:50.113 "workload": "randrw", 00:42:50.113 "percentage": 50, 00:42:50.113 "status": "finished", 00:42:50.113 "queue_depth": 2, 00:42:50.113 "io_size": 3145728, 00:42:50.113 "runtime": 8.653066, 00:42:50.113 "iops": 76.27354281129949, 00:42:50.113 "mibps": 228.8206284338985, 00:42:50.113 "io_failed": 0, 00:42:50.113 "io_timeout": 0, 00:42:50.113 "avg_latency_us": 18277.748363636365, 00:42:50.113 "min_latency_us": 273.6872727272727, 00:42:50.113 "max_latency_us": 116296.61090909092 00:42:50.113 } 00:42:50.113 ], 00:42:50.113 "core_count": 1 00:42:50.113 } 00:42:50.113 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:50.113 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:50.113 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:42:50.113 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:50.113 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:50.113 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:50.372 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:42:50.372 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:42:50.372 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:42:50.372 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:42:50.372 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:42:50.372 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:42:50.372 05:33:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:42:50.372 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:42:50.372 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:50.372 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:42:50.372 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:50.372 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:50.372 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:42:50.632 /dev/nbd0 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:50.632 1+0 records in 00:42:50.632 1+0 records out 00:42:50.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427391 s, 9.6 MB/s 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:50.632 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:42:50.891 /dev/nbd1 00:42:50.891 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:42:50.891 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:42:50.891 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:42:50.891 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:42:50.891 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:42:50.891 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:42:50.891 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:42:50.891 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:42:50.891 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:42:50.891 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:42:50.891 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 
count=1 iflag=direct 00:42:50.891 1+0 records in 00:42:50.891 1+0 records out 00:42:50.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000828116 s, 4.9 MB/s 00:42:50.891 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:50.891 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:42:50.891 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:50.891 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:42:50.891 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:42:50.891 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:50.891 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:50.891 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:42:51.149 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:42:51.150 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:42:51.150 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:42:51.150 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:51.150 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:42:51.150 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:51.150 05:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:42:51.408 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:42:51.408 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:42:51.408 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:42:51.408 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:51.408 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:51.408 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:42:51.408 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:42:51.408 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:42:51.408 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:42:51.408 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:42:51.408 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:42:51.408 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:42:51.408 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:42:51.408 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:42:51.408 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:42:51.408 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:51.408 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:42:51.408 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:51.408 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:51.408 05:33:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:42:51.667 /dev/nbd1 00:42:51.667 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:42:51.667 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:42:51.667 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:42:51.667 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:42:51.667 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:42:51.667 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:42:51.667 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:42:51.667 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:42:51.667 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:42:51.667 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:42:51.667 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:51.667 1+0 records in 00:42:51.667 1+0 records out 00:42:51.667 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587975 s, 7.0 MB/s 00:42:51.667 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:51.667 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:42:51.667 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:42:51.667 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:42:51.667 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:42:51.667 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:51.667 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:51.667 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:42:51.926 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:42:51.926 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:42:51.926 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:42:51.926 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:51.926 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:42:51.926 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:51.926 05:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:42:52.185 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:42:52.185 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:42:52.185 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:42:52.185 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:52.185 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:52.185 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:42:52.185 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:42:52.185 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:42:52.185 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:42:52.185 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:42:52.185 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:42:52.185 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:52.185 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:42:52.185 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:52.185 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:42:52.443 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:52.443 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:52.443 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:52.443 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:52.443 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:52.443 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:52.443 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:42:52.443 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:42:52.443 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:42:52.443 05:33:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:42:52.443 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:52.443 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:52.443 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.443 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:42:52.443 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:52.443 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:52.443 [2024-12-09 05:33:39.360547] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:42:52.443 [2024-12-09 05:33:39.360635] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:52.443 [2024-12-09 05:33:39.360687] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:42:52.443 [2024-12-09 05:33:39.360705] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:52.443 [2024-12-09 05:33:39.364266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:52.443 [2024-12-09 05:33:39.364311] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:42:52.443 [2024-12-09 05:33:39.364471] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:42:52.444 [2024-12-09 05:33:39.364593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:52.444 [2024-12-09 05:33:39.364872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:42:52.444 [2024-12-09 05:33:39.365128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:42:52.444 spare 00:42:52.444 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.444 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:42:52.444 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:52.444 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:52.702 [2024-12-09 05:33:39.465296] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:42:52.702 [2024-12-09 05:33:39.465356] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:42:52.702 [2024-12-09 05:33:39.465863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:42:52.702 [2024-12-09 05:33:39.466151] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:42:52.702 [2024-12-09 05:33:39.466181] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:42:52.702 [2024-12-09 05:33:39.466446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:52.702 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.702 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:42:52.702 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:52.702 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:52.702 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:52.702 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:52.702 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:42:52.702 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:52.702 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:52.702 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:52.702 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:52.702 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:52.702 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:52.702 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:52.702 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:52.702 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.702 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:52.702 "name": "raid_bdev1", 00:42:52.702 "uuid": "98f4aad2-8368-40e9-985c-e3bb6cd63c86", 00:42:52.702 "strip_size_kb": 0, 00:42:52.702 "state": "online", 00:42:52.702 "raid_level": "raid1", 00:42:52.702 "superblock": true, 00:42:52.702 "num_base_bdevs": 4, 00:42:52.702 "num_base_bdevs_discovered": 3, 00:42:52.702 "num_base_bdevs_operational": 3, 00:42:52.702 "base_bdevs_list": [ 00:42:52.702 { 00:42:52.702 "name": "spare", 00:42:52.702 "uuid": "02b0e00f-ca00-5f89-87d8-2d8a76251c21", 00:42:52.702 "is_configured": true, 00:42:52.702 "data_offset": 2048, 00:42:52.702 "data_size": 63488 00:42:52.702 }, 00:42:52.702 { 00:42:52.702 "name": null, 00:42:52.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:52.702 "is_configured": false, 00:42:52.702 "data_offset": 2048, 00:42:52.702 "data_size": 63488 00:42:52.702 }, 00:42:52.702 { 00:42:52.702 
"name": "BaseBdev3", 00:42:52.702 "uuid": "6719fdac-bb6b-5be6-b1b4-69b9769ce534", 00:42:52.702 "is_configured": true, 00:42:52.702 "data_offset": 2048, 00:42:52.702 "data_size": 63488 00:42:52.702 }, 00:42:52.702 { 00:42:52.702 "name": "BaseBdev4", 00:42:52.702 "uuid": "19659a15-3e79-5326-84bb-0984d25e2aa7", 00:42:52.702 "is_configured": true, 00:42:52.702 "data_offset": 2048, 00:42:52.702 "data_size": 63488 00:42:52.702 } 00:42:52.702 ] 00:42:52.702 }' 00:42:52.702 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:52.702 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:53.277 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:53.277 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:53.277 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:42:53.277 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:42:53.277 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:53.277 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:53.277 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:53.277 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:53.277 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:53.277 05:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:53.277 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:53.277 "name": "raid_bdev1", 00:42:53.277 "uuid": "98f4aad2-8368-40e9-985c-e3bb6cd63c86", 
00:42:53.277 "strip_size_kb": 0, 00:42:53.277 "state": "online", 00:42:53.277 "raid_level": "raid1", 00:42:53.277 "superblock": true, 00:42:53.277 "num_base_bdevs": 4, 00:42:53.277 "num_base_bdevs_discovered": 3, 00:42:53.277 "num_base_bdevs_operational": 3, 00:42:53.277 "base_bdevs_list": [ 00:42:53.277 { 00:42:53.277 "name": "spare", 00:42:53.277 "uuid": "02b0e00f-ca00-5f89-87d8-2d8a76251c21", 00:42:53.277 "is_configured": true, 00:42:53.277 "data_offset": 2048, 00:42:53.277 "data_size": 63488 00:42:53.277 }, 00:42:53.277 { 00:42:53.277 "name": null, 00:42:53.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:53.277 "is_configured": false, 00:42:53.277 "data_offset": 2048, 00:42:53.277 "data_size": 63488 00:42:53.277 }, 00:42:53.277 { 00:42:53.277 "name": "BaseBdev3", 00:42:53.277 "uuid": "6719fdac-bb6b-5be6-b1b4-69b9769ce534", 00:42:53.277 "is_configured": true, 00:42:53.277 "data_offset": 2048, 00:42:53.277 "data_size": 63488 00:42:53.277 }, 00:42:53.277 { 00:42:53.277 "name": "BaseBdev4", 00:42:53.277 "uuid": "19659a15-3e79-5326-84bb-0984d25e2aa7", 00:42:53.277 "is_configured": true, 00:42:53.277 "data_offset": 2048, 00:42:53.277 "data_size": 63488 00:42:53.277 } 00:42:53.277 ] 00:42:53.277 }' 00:42:53.277 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:53.277 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:42:53.277 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:53.277 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:53.278 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:53.278 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:42:53.278 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:42:53.278 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:53.278 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:53.278 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:42:53.278 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:42:53.278 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:53.278 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:53.278 [2024-12-09 05:33:40.177445] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:53.278 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:53.278 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:42:53.278 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:53.278 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:53.278 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:53.278 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:53.278 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:53.278 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:53.278 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:53.278 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:53.278 05:33:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:53.278 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:53.278 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:53.278 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:53.278 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:53.278 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:53.278 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:53.278 "name": "raid_bdev1", 00:42:53.278 "uuid": "98f4aad2-8368-40e9-985c-e3bb6cd63c86", 00:42:53.278 "strip_size_kb": 0, 00:42:53.278 "state": "online", 00:42:53.278 "raid_level": "raid1", 00:42:53.278 "superblock": true, 00:42:53.278 "num_base_bdevs": 4, 00:42:53.278 "num_base_bdevs_discovered": 2, 00:42:53.278 "num_base_bdevs_operational": 2, 00:42:53.278 "base_bdevs_list": [ 00:42:53.278 { 00:42:53.278 "name": null, 00:42:53.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:53.278 "is_configured": false, 00:42:53.278 "data_offset": 0, 00:42:53.278 "data_size": 63488 00:42:53.278 }, 00:42:53.278 { 00:42:53.278 "name": null, 00:42:53.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:53.278 "is_configured": false, 00:42:53.278 "data_offset": 2048, 00:42:53.278 "data_size": 63488 00:42:53.278 }, 00:42:53.278 { 00:42:53.278 "name": "BaseBdev3", 00:42:53.278 "uuid": "6719fdac-bb6b-5be6-b1b4-69b9769ce534", 00:42:53.278 "is_configured": true, 00:42:53.278 "data_offset": 2048, 00:42:53.278 "data_size": 63488 00:42:53.278 }, 00:42:53.278 { 00:42:53.278 "name": "BaseBdev4", 00:42:53.278 "uuid": "19659a15-3e79-5326-84bb-0984d25e2aa7", 00:42:53.278 "is_configured": true, 00:42:53.278 "data_offset": 2048, 00:42:53.278 
"data_size": 63488 00:42:53.278 } 00:42:53.278 ] 00:42:53.278 }' 00:42:53.278 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:53.278 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:53.865 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:42:53.865 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:53.865 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:53.865 [2024-12-09 05:33:40.653861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:53.865 [2024-12-09 05:33:40.654277] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:42:53.865 [2024-12-09 05:33:40.654300] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:42:53.865 [2024-12-09 05:33:40.654397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:53.865 [2024-12-09 05:33:40.669433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:42:53.865 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:53.865 05:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:42:53.865 [2024-12-09 05:33:40.672512] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:54.815 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:54.815 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:54.815 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:54.815 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:54.815 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:54.815 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:54.815 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:54.815 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:54.815 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:54.815 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:54.815 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:54.815 "name": "raid_bdev1", 00:42:54.815 "uuid": "98f4aad2-8368-40e9-985c-e3bb6cd63c86", 00:42:54.815 "strip_size_kb": 0, 00:42:54.815 "state": "online", 
00:42:54.815 "raid_level": "raid1", 00:42:54.815 "superblock": true, 00:42:54.815 "num_base_bdevs": 4, 00:42:54.815 "num_base_bdevs_discovered": 3, 00:42:54.815 "num_base_bdevs_operational": 3, 00:42:54.815 "process": { 00:42:54.815 "type": "rebuild", 00:42:54.815 "target": "spare", 00:42:54.815 "progress": { 00:42:54.815 "blocks": 20480, 00:42:54.815 "percent": 32 00:42:54.815 } 00:42:54.815 }, 00:42:54.815 "base_bdevs_list": [ 00:42:54.815 { 00:42:54.815 "name": "spare", 00:42:54.815 "uuid": "02b0e00f-ca00-5f89-87d8-2d8a76251c21", 00:42:54.815 "is_configured": true, 00:42:54.815 "data_offset": 2048, 00:42:54.815 "data_size": 63488 00:42:54.815 }, 00:42:54.815 { 00:42:54.815 "name": null, 00:42:54.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:54.815 "is_configured": false, 00:42:54.815 "data_offset": 2048, 00:42:54.815 "data_size": 63488 00:42:54.815 }, 00:42:54.815 { 00:42:54.815 "name": "BaseBdev3", 00:42:54.815 "uuid": "6719fdac-bb6b-5be6-b1b4-69b9769ce534", 00:42:54.815 "is_configured": true, 00:42:54.815 "data_offset": 2048, 00:42:54.815 "data_size": 63488 00:42:54.815 }, 00:42:54.816 { 00:42:54.816 "name": "BaseBdev4", 00:42:54.816 "uuid": "19659a15-3e79-5326-84bb-0984d25e2aa7", 00:42:54.816 "is_configured": true, 00:42:54.816 "data_offset": 2048, 00:42:54.816 "data_size": 63488 00:42:54.816 } 00:42:54.816 ] 00:42:54.816 }' 00:42:54.816 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:54.816 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:54.816 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:55.075 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:55.075 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:42:55.075 05:33:41 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.075 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:55.075 [2024-12-09 05:33:41.826743] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:55.075 [2024-12-09 05:33:41.882647] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:42:55.075 [2024-12-09 05:33:41.882970] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:55.075 [2024-12-09 05:33:41.883018] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:55.075 [2024-12-09 05:33:41.883034] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:42:55.075 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.075 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:42:55.075 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:55.075 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:55.075 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:55.075 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:55.075 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:55.075 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:55.075 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:55.075 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:55.075 05:33:41 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:55.075 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:55.075 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:55.075 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.075 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:55.075 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.075 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:55.075 "name": "raid_bdev1", 00:42:55.075 "uuid": "98f4aad2-8368-40e9-985c-e3bb6cd63c86", 00:42:55.075 "strip_size_kb": 0, 00:42:55.075 "state": "online", 00:42:55.075 "raid_level": "raid1", 00:42:55.075 "superblock": true, 00:42:55.075 "num_base_bdevs": 4, 00:42:55.075 "num_base_bdevs_discovered": 2, 00:42:55.075 "num_base_bdevs_operational": 2, 00:42:55.075 "base_bdevs_list": [ 00:42:55.075 { 00:42:55.075 "name": null, 00:42:55.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:55.075 "is_configured": false, 00:42:55.075 "data_offset": 0, 00:42:55.075 "data_size": 63488 00:42:55.075 }, 00:42:55.075 { 00:42:55.075 "name": null, 00:42:55.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:55.075 "is_configured": false, 00:42:55.075 "data_offset": 2048, 00:42:55.075 "data_size": 63488 00:42:55.075 }, 00:42:55.075 { 00:42:55.075 "name": "BaseBdev3", 00:42:55.075 "uuid": "6719fdac-bb6b-5be6-b1b4-69b9769ce534", 00:42:55.075 "is_configured": true, 00:42:55.075 "data_offset": 2048, 00:42:55.075 "data_size": 63488 00:42:55.075 }, 00:42:55.075 { 00:42:55.075 "name": "BaseBdev4", 00:42:55.075 "uuid": "19659a15-3e79-5326-84bb-0984d25e2aa7", 00:42:55.075 "is_configured": true, 00:42:55.075 "data_offset": 2048, 00:42:55.076 
"data_size": 63488 00:42:55.076 } 00:42:55.076 ] 00:42:55.076 }' 00:42:55.076 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:55.076 05:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:55.642 05:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:42:55.642 05:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.642 05:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:55.642 [2024-12-09 05:33:42.420282] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:42:55.642 [2024-12-09 05:33:42.420395] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:55.642 [2024-12-09 05:33:42.420444] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:42:55.642 [2024-12-09 05:33:42.420462] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:55.642 [2024-12-09 05:33:42.421244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:55.642 [2024-12-09 05:33:42.421291] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:42:55.642 [2024-12-09 05:33:42.421443] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:42:55.642 [2024-12-09 05:33:42.421468] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:42:55.642 [2024-12-09 05:33:42.421510] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:42:55.642 [2024-12-09 05:33:42.421568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:55.642 [2024-12-09 05:33:42.435312] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:42:55.642 spare 00:42:55.642 05:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.642 05:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:42:55.642 [2024-12-09 05:33:42.437931] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:56.579 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:56.579 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:56.579 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:56.579 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:56.579 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:56.579 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:56.579 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:56.579 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:56.579 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:56.579 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:56.579 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:56.579 "name": "raid_bdev1", 00:42:56.579 "uuid": "98f4aad2-8368-40e9-985c-e3bb6cd63c86", 00:42:56.579 "strip_size_kb": 0, 00:42:56.579 
"state": "online", 00:42:56.579 "raid_level": "raid1", 00:42:56.579 "superblock": true, 00:42:56.579 "num_base_bdevs": 4, 00:42:56.579 "num_base_bdevs_discovered": 3, 00:42:56.579 "num_base_bdevs_operational": 3, 00:42:56.579 "process": { 00:42:56.579 "type": "rebuild", 00:42:56.579 "target": "spare", 00:42:56.579 "progress": { 00:42:56.579 "blocks": 20480, 00:42:56.579 "percent": 32 00:42:56.579 } 00:42:56.579 }, 00:42:56.579 "base_bdevs_list": [ 00:42:56.579 { 00:42:56.579 "name": "spare", 00:42:56.579 "uuid": "02b0e00f-ca00-5f89-87d8-2d8a76251c21", 00:42:56.579 "is_configured": true, 00:42:56.579 "data_offset": 2048, 00:42:56.579 "data_size": 63488 00:42:56.579 }, 00:42:56.579 { 00:42:56.579 "name": null, 00:42:56.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:56.579 "is_configured": false, 00:42:56.579 "data_offset": 2048, 00:42:56.579 "data_size": 63488 00:42:56.579 }, 00:42:56.579 { 00:42:56.579 "name": "BaseBdev3", 00:42:56.579 "uuid": "6719fdac-bb6b-5be6-b1b4-69b9769ce534", 00:42:56.579 "is_configured": true, 00:42:56.579 "data_offset": 2048, 00:42:56.579 "data_size": 63488 00:42:56.579 }, 00:42:56.579 { 00:42:56.579 "name": "BaseBdev4", 00:42:56.579 "uuid": "19659a15-3e79-5326-84bb-0984d25e2aa7", 00:42:56.579 "is_configured": true, 00:42:56.579 "data_offset": 2048, 00:42:56.579 "data_size": 63488 00:42:56.579 } 00:42:56.579 ] 00:42:56.579 }' 00:42:56.579 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:56.579 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:56.579 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:56.839 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:56.839 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:42:56.839 05:33:43 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:56.839 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:56.839 [2024-12-09 05:33:43.604019] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:56.839 [2024-12-09 05:33:43.647879] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:42:56.839 [2024-12-09 05:33:43.648014] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:56.839 [2024-12-09 05:33:43.648042] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:56.839 [2024-12-09 05:33:43.648059] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:42:56.839 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:56.839 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:42:56.839 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:56.839 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:56.839 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:56.839 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:56.839 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:56.839 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:56.839 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:56.839 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:56.839 05:33:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:56.839 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:56.839 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:56.839 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:56.839 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:56.839 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:56.839 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:56.839 "name": "raid_bdev1", 00:42:56.839 "uuid": "98f4aad2-8368-40e9-985c-e3bb6cd63c86", 00:42:56.839 "strip_size_kb": 0, 00:42:56.839 "state": "online", 00:42:56.839 "raid_level": "raid1", 00:42:56.839 "superblock": true, 00:42:56.839 "num_base_bdevs": 4, 00:42:56.839 "num_base_bdevs_discovered": 2, 00:42:56.839 "num_base_bdevs_operational": 2, 00:42:56.839 "base_bdevs_list": [ 00:42:56.839 { 00:42:56.839 "name": null, 00:42:56.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:56.839 "is_configured": false, 00:42:56.839 "data_offset": 0, 00:42:56.839 "data_size": 63488 00:42:56.839 }, 00:42:56.839 { 00:42:56.839 "name": null, 00:42:56.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:56.839 "is_configured": false, 00:42:56.839 "data_offset": 2048, 00:42:56.839 "data_size": 63488 00:42:56.839 }, 00:42:56.839 { 00:42:56.839 "name": "BaseBdev3", 00:42:56.839 "uuid": "6719fdac-bb6b-5be6-b1b4-69b9769ce534", 00:42:56.839 "is_configured": true, 00:42:56.839 "data_offset": 2048, 00:42:56.839 "data_size": 63488 00:42:56.839 }, 00:42:56.839 { 00:42:56.839 "name": "BaseBdev4", 00:42:56.839 "uuid": "19659a15-3e79-5326-84bb-0984d25e2aa7", 00:42:56.839 "is_configured": true, 00:42:56.839 "data_offset": 2048, 00:42:56.839 
"data_size": 63488 00:42:56.839 } 00:42:56.839 ] 00:42:56.839 }' 00:42:56.839 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:56.839 05:33:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:57.407 05:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:57.407 05:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:57.407 05:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:42:57.407 05:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:42:57.407 05:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:57.407 05:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:57.407 05:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:57.407 05:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.407 05:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:57.407 05:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.407 05:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:57.407 "name": "raid_bdev1", 00:42:57.407 "uuid": "98f4aad2-8368-40e9-985c-e3bb6cd63c86", 00:42:57.407 "strip_size_kb": 0, 00:42:57.407 "state": "online", 00:42:57.407 "raid_level": "raid1", 00:42:57.407 "superblock": true, 00:42:57.407 "num_base_bdevs": 4, 00:42:57.407 "num_base_bdevs_discovered": 2, 00:42:57.407 "num_base_bdevs_operational": 2, 00:42:57.407 "base_bdevs_list": [ 00:42:57.407 { 00:42:57.407 "name": null, 00:42:57.407 "uuid": "00000000-0000-0000-0000-000000000000", 
00:42:57.407 "is_configured": false, 00:42:57.407 "data_offset": 0, 00:42:57.407 "data_size": 63488 00:42:57.407 }, 00:42:57.407 { 00:42:57.407 "name": null, 00:42:57.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:57.407 "is_configured": false, 00:42:57.407 "data_offset": 2048, 00:42:57.407 "data_size": 63488 00:42:57.407 }, 00:42:57.407 { 00:42:57.407 "name": "BaseBdev3", 00:42:57.407 "uuid": "6719fdac-bb6b-5be6-b1b4-69b9769ce534", 00:42:57.407 "is_configured": true, 00:42:57.407 "data_offset": 2048, 00:42:57.407 "data_size": 63488 00:42:57.407 }, 00:42:57.407 { 00:42:57.407 "name": "BaseBdev4", 00:42:57.407 "uuid": "19659a15-3e79-5326-84bb-0984d25e2aa7", 00:42:57.407 "is_configured": true, 00:42:57.407 "data_offset": 2048, 00:42:57.407 "data_size": 63488 00:42:57.407 } 00:42:57.407 ] 00:42:57.407 }' 00:42:57.407 05:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:57.407 05:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:42:57.407 05:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:57.407 05:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:57.407 05:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:42:57.407 05:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.407 05:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:57.407 05:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.407 05:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:42:57.407 05:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.407 05:33:44 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:57.407 [2024-12-09 05:33:44.364300] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:42:57.407 [2024-12-09 05:33:44.364424] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:57.407 [2024-12-09 05:33:44.364456] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:42:57.407 [2024-12-09 05:33:44.364475] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:57.407 [2024-12-09 05:33:44.365119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:57.407 [2024-12-09 05:33:44.365163] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:42:57.407 [2024-12-09 05:33:44.365282] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:42:57.407 [2024-12-09 05:33:44.365321] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:42:57.407 [2024-12-09 05:33:44.365334] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:42:57.407 [2024-12-09 05:33:44.365354] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:42:57.407 BaseBdev1 00:42:57.407 05:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.407 05:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:42:58.791 05:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:42:58.791 05:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:58.791 05:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:42:58.791 05:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:58.791 05:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:58.791 05:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:58.791 05:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:58.791 05:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:58.791 05:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:58.791 05:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:58.791 05:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:58.792 05:33:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:58.792 05:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:58.792 05:33:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:58.792 05:33:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:58.792 05:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:58.792 "name": "raid_bdev1", 00:42:58.792 "uuid": "98f4aad2-8368-40e9-985c-e3bb6cd63c86", 00:42:58.792 "strip_size_kb": 0, 00:42:58.792 "state": "online", 00:42:58.792 "raid_level": "raid1", 00:42:58.792 "superblock": true, 00:42:58.792 "num_base_bdevs": 4, 00:42:58.792 "num_base_bdevs_discovered": 2, 00:42:58.792 "num_base_bdevs_operational": 2, 00:42:58.792 "base_bdevs_list": [ 00:42:58.792 { 00:42:58.792 "name": null, 00:42:58.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:58.792 "is_configured": false, 00:42:58.792 
"data_offset": 0, 00:42:58.792 "data_size": 63488 00:42:58.792 }, 00:42:58.792 { 00:42:58.792 "name": null, 00:42:58.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:58.792 "is_configured": false, 00:42:58.792 "data_offset": 2048, 00:42:58.792 "data_size": 63488 00:42:58.792 }, 00:42:58.792 { 00:42:58.792 "name": "BaseBdev3", 00:42:58.792 "uuid": "6719fdac-bb6b-5be6-b1b4-69b9769ce534", 00:42:58.792 "is_configured": true, 00:42:58.792 "data_offset": 2048, 00:42:58.792 "data_size": 63488 00:42:58.792 }, 00:42:58.792 { 00:42:58.792 "name": "BaseBdev4", 00:42:58.792 "uuid": "19659a15-3e79-5326-84bb-0984d25e2aa7", 00:42:58.792 "is_configured": true, 00:42:58.792 "data_offset": 2048, 00:42:58.792 "data_size": 63488 00:42:58.792 } 00:42:58.792 ] 00:42:58.792 }' 00:42:58.792 05:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:58.792 05:33:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:59.050 05:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:59.050 05:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:59.050 05:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:42:59.050 05:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:42:59.050 05:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:59.050 05:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:59.050 05:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:59.050 05:33:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:59.050 05:33:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:42:59.050 05:33:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:59.050 05:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:59.050 "name": "raid_bdev1", 00:42:59.050 "uuid": "98f4aad2-8368-40e9-985c-e3bb6cd63c86", 00:42:59.050 "strip_size_kb": 0, 00:42:59.050 "state": "online", 00:42:59.050 "raid_level": "raid1", 00:42:59.050 "superblock": true, 00:42:59.050 "num_base_bdevs": 4, 00:42:59.050 "num_base_bdevs_discovered": 2, 00:42:59.050 "num_base_bdevs_operational": 2, 00:42:59.050 "base_bdevs_list": [ 00:42:59.050 { 00:42:59.050 "name": null, 00:42:59.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:59.050 "is_configured": false, 00:42:59.050 "data_offset": 0, 00:42:59.050 "data_size": 63488 00:42:59.050 }, 00:42:59.050 { 00:42:59.050 "name": null, 00:42:59.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:59.050 "is_configured": false, 00:42:59.050 "data_offset": 2048, 00:42:59.050 "data_size": 63488 00:42:59.050 }, 00:42:59.050 { 00:42:59.050 "name": "BaseBdev3", 00:42:59.050 "uuid": "6719fdac-bb6b-5be6-b1b4-69b9769ce534", 00:42:59.050 "is_configured": true, 00:42:59.050 "data_offset": 2048, 00:42:59.050 "data_size": 63488 00:42:59.050 }, 00:42:59.050 { 00:42:59.050 "name": "BaseBdev4", 00:42:59.050 "uuid": "19659a15-3e79-5326-84bb-0984d25e2aa7", 00:42:59.050 "is_configured": true, 00:42:59.050 "data_offset": 2048, 00:42:59.050 "data_size": 63488 00:42:59.050 } 00:42:59.050 ] 00:42:59.050 }' 00:42:59.050 05:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:59.335 05:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:42:59.335 05:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:59.335 05:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:59.335 
05:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:42:59.335 05:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:42:59.335 05:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:42:59.335 05:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:42:59.335 05:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:59.335 05:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:42:59.335 05:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:59.335 05:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:42:59.335 05:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:59.335 05:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:42:59.335 [2024-12-09 05:33:46.090561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:42:59.335 [2024-12-09 05:33:46.090874] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:42:59.335 [2024-12-09 05:33:46.090902] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:42:59.335 request: 00:42:59.335 { 00:42:59.335 "base_bdev": "BaseBdev1", 00:42:59.335 "raid_bdev": "raid_bdev1", 00:42:59.335 "method": "bdev_raid_add_base_bdev", 00:42:59.335 "req_id": 1 00:42:59.335 } 00:42:59.335 Got JSON-RPC error response 00:42:59.335 response: 00:42:59.335 { 00:42:59.335 "code": -22, 00:42:59.335 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:42:59.335 } 00:42:59.335 05:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:42:59.335 05:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:42:59.335 05:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:59.335 05:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:59.335 05:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:59.335 05:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:43:00.286 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:43:00.286 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:00.286 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:00.286 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:00.286 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:00.286 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:00.286 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:00.286 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:00.286 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:00.286 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:00.286 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:00.286 05:33:47 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:00.286 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:00.286 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:43:00.286 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:00.286 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:00.286 "name": "raid_bdev1", 00:43:00.286 "uuid": "98f4aad2-8368-40e9-985c-e3bb6cd63c86", 00:43:00.286 "strip_size_kb": 0, 00:43:00.286 "state": "online", 00:43:00.286 "raid_level": "raid1", 00:43:00.286 "superblock": true, 00:43:00.286 "num_base_bdevs": 4, 00:43:00.286 "num_base_bdevs_discovered": 2, 00:43:00.286 "num_base_bdevs_operational": 2, 00:43:00.286 "base_bdevs_list": [ 00:43:00.286 { 00:43:00.286 "name": null, 00:43:00.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:00.286 "is_configured": false, 00:43:00.286 "data_offset": 0, 00:43:00.286 "data_size": 63488 00:43:00.286 }, 00:43:00.286 { 00:43:00.286 "name": null, 00:43:00.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:00.286 "is_configured": false, 00:43:00.286 "data_offset": 2048, 00:43:00.286 "data_size": 63488 00:43:00.286 }, 00:43:00.286 { 00:43:00.286 "name": "BaseBdev3", 00:43:00.286 "uuid": "6719fdac-bb6b-5be6-b1b4-69b9769ce534", 00:43:00.286 "is_configured": true, 00:43:00.286 "data_offset": 2048, 00:43:00.286 "data_size": 63488 00:43:00.286 }, 00:43:00.286 { 00:43:00.286 "name": "BaseBdev4", 00:43:00.286 "uuid": "19659a15-3e79-5326-84bb-0984d25e2aa7", 00:43:00.286 "is_configured": true, 00:43:00.286 "data_offset": 2048, 00:43:00.286 "data_size": 63488 00:43:00.286 } 00:43:00.286 ] 00:43:00.286 }' 00:43:00.286 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:00.286 05:33:47 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:43:00.853 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:43:00.853 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:00.853 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:43:00.853 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:43:00.853 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:00.853 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:00.853 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:00.853 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:00.853 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:43:00.853 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:00.853 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:00.853 "name": "raid_bdev1", 00:43:00.853 "uuid": "98f4aad2-8368-40e9-985c-e3bb6cd63c86", 00:43:00.853 "strip_size_kb": 0, 00:43:00.853 "state": "online", 00:43:00.853 "raid_level": "raid1", 00:43:00.853 "superblock": true, 00:43:00.853 "num_base_bdevs": 4, 00:43:00.853 "num_base_bdevs_discovered": 2, 00:43:00.853 "num_base_bdevs_operational": 2, 00:43:00.853 "base_bdevs_list": [ 00:43:00.853 { 00:43:00.853 "name": null, 00:43:00.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:00.853 "is_configured": false, 00:43:00.853 "data_offset": 0, 00:43:00.853 "data_size": 63488 00:43:00.853 }, 00:43:00.853 { 00:43:00.853 "name": null, 00:43:00.853 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:43:00.853 "is_configured": false, 00:43:00.853 "data_offset": 2048, 00:43:00.853 "data_size": 63488 00:43:00.853 }, 00:43:00.853 { 00:43:00.853 "name": "BaseBdev3", 00:43:00.853 "uuid": "6719fdac-bb6b-5be6-b1b4-69b9769ce534", 00:43:00.853 "is_configured": true, 00:43:00.853 "data_offset": 2048, 00:43:00.853 "data_size": 63488 00:43:00.853 }, 00:43:00.853 { 00:43:00.853 "name": "BaseBdev4", 00:43:00.853 "uuid": "19659a15-3e79-5326-84bb-0984d25e2aa7", 00:43:00.853 "is_configured": true, 00:43:00.854 "data_offset": 2048, 00:43:00.854 "data_size": 63488 00:43:00.854 } 00:43:00.854 ] 00:43:00.854 }' 00:43:00.854 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:00.854 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:43:00.854 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:00.854 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:43:00.854 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79601 00:43:00.854 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79601 ']' 00:43:00.854 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79601 00:43:00.854 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:43:00.854 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:00.854 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79601 00:43:00.854 killing process with pid 79601 00:43:00.854 Received shutdown signal, test time was about 19.453774 seconds 00:43:00.854 00:43:00.854 Latency(us) 00:43:00.854 [2024-12-09T05:33:47.826Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:43:00.854 [2024-12-09T05:33:47.826Z] =================================================================================================================== 00:43:00.854 [2024-12-09T05:33:47.826Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:00.854 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:00.854 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:00.854 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79601' 00:43:00.854 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79601 00:43:00.854 [2024-12-09 05:33:47.809618] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:43:00.854 05:33:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79601 00:43:00.854 [2024-12-09 05:33:47.809810] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:00.854 [2024-12-09 05:33:47.809926] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:00.854 [2024-12-09 05:33:47.809944] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:43:01.422 [2024-12-09 05:33:48.182203] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:43:02.798 05:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:43:02.798 00:43:02.798 real 0m23.230s 00:43:02.798 user 0m31.435s 00:43:02.798 sys 0m2.626s 00:43:02.798 05:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:02.798 ************************************ 00:43:02.798 END TEST raid_rebuild_test_sb_io 00:43:02.798 ************************************ 00:43:02.798 05:33:49 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:43:02.798 05:33:49 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:43:02.798 05:33:49 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:43:02.798 05:33:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:43:02.798 05:33:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:02.798 05:33:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:43:02.798 ************************************ 00:43:02.798 START TEST raid5f_state_function_test 00:43:02.798 ************************************ 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:43:02.798 05:33:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80335 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80335' 00:43:02.798 Process raid pid: 80335 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80335 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80335 ']' 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:02.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:02.798 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:02.798 [2024-12-09 05:33:49.551608] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:43:02.798 [2024-12-09 05:33:49.551819] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:02.798 [2024-12-09 05:33:49.742157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:03.055 [2024-12-09 05:33:49.876942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:03.313 [2024-12-09 05:33:50.079432] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:43:03.313 [2024-12-09 05:33:50.079764] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:43:03.878 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:03.878 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:43:03.878 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:43:03.878 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.878 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:03.878 [2024-12-09 05:33:50.583579] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:43:03.878 [2024-12-09 05:33:50.583665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:43:03.878 [2024-12-09 05:33:50.583681] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:43:03.878 [2024-12-09 05:33:50.583696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:43:03.878 [2024-12-09 05:33:50.583705] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:43:03.878 [2024-12-09 05:33:50.583719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:43:03.878 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.878 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:43:03.878 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:43:03.878 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:43:03.878 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:03.878 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:03.878 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:43:03.878 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:03.878 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:03.878 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:03.878 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:03.878 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:03.878 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.879 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:03.879 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:03.879 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:43:03.879 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:03.879 "name": "Existed_Raid", 00:43:03.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:03.879 "strip_size_kb": 64, 00:43:03.879 "state": "configuring", 00:43:03.879 "raid_level": "raid5f", 00:43:03.879 "superblock": false, 00:43:03.879 "num_base_bdevs": 3, 00:43:03.879 "num_base_bdevs_discovered": 0, 00:43:03.879 "num_base_bdevs_operational": 3, 00:43:03.879 "base_bdevs_list": [ 00:43:03.879 { 00:43:03.879 "name": "BaseBdev1", 00:43:03.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:03.879 "is_configured": false, 00:43:03.879 "data_offset": 0, 00:43:03.879 "data_size": 0 00:43:03.879 }, 00:43:03.879 { 00:43:03.879 "name": "BaseBdev2", 00:43:03.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:03.879 "is_configured": false, 00:43:03.879 "data_offset": 0, 00:43:03.879 "data_size": 0 00:43:03.879 }, 00:43:03.879 { 00:43:03.879 "name": "BaseBdev3", 00:43:03.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:03.879 "is_configured": false, 00:43:03.879 "data_offset": 0, 00:43:03.879 "data_size": 0 00:43:03.879 } 00:43:03.879 ] 00:43:03.879 }' 00:43:03.879 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:03.879 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:04.136 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:43:04.136 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.136 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:04.136 [2024-12-09 05:33:51.091678] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:43:04.136 [2024-12-09 05:33:51.091935] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:43:04.136 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.136 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:43:04.136 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.136 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:04.136 [2024-12-09 05:33:51.099661] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:43:04.136 [2024-12-09 05:33:51.099892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:43:04.136 [2024-12-09 05:33:51.099920] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:43:04.136 [2024-12-09 05:33:51.099939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:43:04.136 [2024-12-09 05:33:51.099949] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:43:04.136 [2024-12-09 05:33:51.099963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:43:04.136 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.136 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:43:04.136 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.136 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:04.395 [2024-12-09 05:33:51.143198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:43:04.395 BaseBdev1 00:43:04.395 05:33:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.396 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:43:04.396 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:43:04.396 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:43:04.396 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:43:04.396 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:43:04.396 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:43:04.396 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:43:04.396 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.396 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:04.396 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.396 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:43:04.396 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.396 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:04.396 [ 00:43:04.396 { 00:43:04.396 "name": "BaseBdev1", 00:43:04.396 "aliases": [ 00:43:04.396 "10ee5f13-7f85-4ec9-999d-2eeef48d4229" 00:43:04.396 ], 00:43:04.396 "product_name": "Malloc disk", 00:43:04.396 "block_size": 512, 00:43:04.396 "num_blocks": 65536, 00:43:04.396 "uuid": "10ee5f13-7f85-4ec9-999d-2eeef48d4229", 00:43:04.396 "assigned_rate_limits": { 00:43:04.396 "rw_ios_per_sec": 0, 00:43:04.396 
"rw_mbytes_per_sec": 0, 00:43:04.396 "r_mbytes_per_sec": 0, 00:43:04.396 "w_mbytes_per_sec": 0 00:43:04.396 }, 00:43:04.396 "claimed": true, 00:43:04.396 "claim_type": "exclusive_write", 00:43:04.396 "zoned": false, 00:43:04.396 "supported_io_types": { 00:43:04.396 "read": true, 00:43:04.396 "write": true, 00:43:04.396 "unmap": true, 00:43:04.396 "flush": true, 00:43:04.396 "reset": true, 00:43:04.396 "nvme_admin": false, 00:43:04.396 "nvme_io": false, 00:43:04.396 "nvme_io_md": false, 00:43:04.396 "write_zeroes": true, 00:43:04.396 "zcopy": true, 00:43:04.396 "get_zone_info": false, 00:43:04.396 "zone_management": false, 00:43:04.396 "zone_append": false, 00:43:04.396 "compare": false, 00:43:04.396 "compare_and_write": false, 00:43:04.396 "abort": true, 00:43:04.396 "seek_hole": false, 00:43:04.396 "seek_data": false, 00:43:04.396 "copy": true, 00:43:04.396 "nvme_iov_md": false 00:43:04.396 }, 00:43:04.396 "memory_domains": [ 00:43:04.396 { 00:43:04.396 "dma_device_id": "system", 00:43:04.396 "dma_device_type": 1 00:43:04.396 }, 00:43:04.396 { 00:43:04.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:43:04.396 "dma_device_type": 2 00:43:04.396 } 00:43:04.396 ], 00:43:04.396 "driver_specific": {} 00:43:04.396 } 00:43:04.396 ] 00:43:04.396 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.396 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:43:04.396 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:43:04.396 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:43:04.396 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:43:04.396 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:04.396 05:33:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:04.396 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:43:04.396 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:04.396 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:04.396 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:04.396 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:04.396 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:04.396 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.396 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:04.396 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:04.396 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.396 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:04.396 "name": "Existed_Raid", 00:43:04.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:04.396 "strip_size_kb": 64, 00:43:04.396 "state": "configuring", 00:43:04.396 "raid_level": "raid5f", 00:43:04.396 "superblock": false, 00:43:04.396 "num_base_bdevs": 3, 00:43:04.396 "num_base_bdevs_discovered": 1, 00:43:04.396 "num_base_bdevs_operational": 3, 00:43:04.396 "base_bdevs_list": [ 00:43:04.396 { 00:43:04.396 "name": "BaseBdev1", 00:43:04.396 "uuid": "10ee5f13-7f85-4ec9-999d-2eeef48d4229", 00:43:04.396 "is_configured": true, 00:43:04.396 "data_offset": 0, 00:43:04.396 "data_size": 65536 00:43:04.396 }, 00:43:04.396 { 00:43:04.396 "name": 
"BaseBdev2", 00:43:04.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:04.396 "is_configured": false, 00:43:04.396 "data_offset": 0, 00:43:04.396 "data_size": 0 00:43:04.396 }, 00:43:04.396 { 00:43:04.396 "name": "BaseBdev3", 00:43:04.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:04.396 "is_configured": false, 00:43:04.396 "data_offset": 0, 00:43:04.396 "data_size": 0 00:43:04.396 } 00:43:04.396 ] 00:43:04.396 }' 00:43:04.396 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:04.396 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:04.989 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:43:04.989 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.989 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:04.989 [2024-12-09 05:33:51.691470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:43:04.989 [2024-12-09 05:33:51.691538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:43:04.989 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.989 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:43:04.989 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.989 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:04.989 [2024-12-09 05:33:51.699558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:43:04.989 [2024-12-09 05:33:51.702283] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:43:04.989 [2024-12-09 05:33:51.702339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:43:04.989 [2024-12-09 05:33:51.702356] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:43:04.989 [2024-12-09 05:33:51.702373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:43:04.990 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.990 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:43:04.990 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:43:04.990 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:43:04.990 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:43:04.990 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:43:04.990 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:04.990 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:04.990 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:43:04.990 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:04.990 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:04.990 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:04.990 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:04.990 05:33:51 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:04.990 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.990 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:04.990 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:04.990 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.990 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:04.990 "name": "Existed_Raid", 00:43:04.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:04.990 "strip_size_kb": 64, 00:43:04.990 "state": "configuring", 00:43:04.990 "raid_level": "raid5f", 00:43:04.990 "superblock": false, 00:43:04.990 "num_base_bdevs": 3, 00:43:04.990 "num_base_bdevs_discovered": 1, 00:43:04.990 "num_base_bdevs_operational": 3, 00:43:04.990 "base_bdevs_list": [ 00:43:04.990 { 00:43:04.990 "name": "BaseBdev1", 00:43:04.990 "uuid": "10ee5f13-7f85-4ec9-999d-2eeef48d4229", 00:43:04.990 "is_configured": true, 00:43:04.990 "data_offset": 0, 00:43:04.990 "data_size": 65536 00:43:04.990 }, 00:43:04.990 { 00:43:04.990 "name": "BaseBdev2", 00:43:04.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:04.990 "is_configured": false, 00:43:04.990 "data_offset": 0, 00:43:04.990 "data_size": 0 00:43:04.990 }, 00:43:04.990 { 00:43:04.990 "name": "BaseBdev3", 00:43:04.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:04.990 "is_configured": false, 00:43:04.990 "data_offset": 0, 00:43:04.990 "data_size": 0 00:43:04.990 } 00:43:04.990 ] 00:43:04.990 }' 00:43:04.990 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:04.990 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:05.555 [2024-12-09 05:33:52.280419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:43:05.555 BaseBdev2 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:43:05.555 [ 00:43:05.555 { 00:43:05.555 "name": "BaseBdev2", 00:43:05.555 "aliases": [ 00:43:05.555 "407b7c82-aafa-46f4-b58d-a39cc4c226be" 00:43:05.555 ], 00:43:05.555 "product_name": "Malloc disk", 00:43:05.555 "block_size": 512, 00:43:05.555 "num_blocks": 65536, 00:43:05.555 "uuid": "407b7c82-aafa-46f4-b58d-a39cc4c226be", 00:43:05.555 "assigned_rate_limits": { 00:43:05.555 "rw_ios_per_sec": 0, 00:43:05.555 "rw_mbytes_per_sec": 0, 00:43:05.555 "r_mbytes_per_sec": 0, 00:43:05.555 "w_mbytes_per_sec": 0 00:43:05.555 }, 00:43:05.555 "claimed": true, 00:43:05.555 "claim_type": "exclusive_write", 00:43:05.555 "zoned": false, 00:43:05.555 "supported_io_types": { 00:43:05.555 "read": true, 00:43:05.555 "write": true, 00:43:05.555 "unmap": true, 00:43:05.555 "flush": true, 00:43:05.555 "reset": true, 00:43:05.555 "nvme_admin": false, 00:43:05.555 "nvme_io": false, 00:43:05.555 "nvme_io_md": false, 00:43:05.555 "write_zeroes": true, 00:43:05.555 "zcopy": true, 00:43:05.555 "get_zone_info": false, 00:43:05.555 "zone_management": false, 00:43:05.555 "zone_append": false, 00:43:05.555 "compare": false, 00:43:05.555 "compare_and_write": false, 00:43:05.555 "abort": true, 00:43:05.555 "seek_hole": false, 00:43:05.555 "seek_data": false, 00:43:05.555 "copy": true, 00:43:05.555 "nvme_iov_md": false 00:43:05.555 }, 00:43:05.555 "memory_domains": [ 00:43:05.555 { 00:43:05.555 "dma_device_id": "system", 00:43:05.555 "dma_device_type": 1 00:43:05.555 }, 00:43:05.555 { 00:43:05.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:43:05.555 "dma_device_type": 2 00:43:05.555 } 00:43:05.555 ], 00:43:05.555 "driver_specific": {} 00:43:05.555 } 00:43:05.555 ] 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:05.555 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.556 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:43:05.556 "name": "Existed_Raid", 00:43:05.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:05.556 "strip_size_kb": 64, 00:43:05.556 "state": "configuring", 00:43:05.556 "raid_level": "raid5f", 00:43:05.556 "superblock": false, 00:43:05.556 "num_base_bdevs": 3, 00:43:05.556 "num_base_bdevs_discovered": 2, 00:43:05.556 "num_base_bdevs_operational": 3, 00:43:05.556 "base_bdevs_list": [ 00:43:05.556 { 00:43:05.556 "name": "BaseBdev1", 00:43:05.556 "uuid": "10ee5f13-7f85-4ec9-999d-2eeef48d4229", 00:43:05.556 "is_configured": true, 00:43:05.556 "data_offset": 0, 00:43:05.556 "data_size": 65536 00:43:05.556 }, 00:43:05.556 { 00:43:05.556 "name": "BaseBdev2", 00:43:05.556 "uuid": "407b7c82-aafa-46f4-b58d-a39cc4c226be", 00:43:05.556 "is_configured": true, 00:43:05.556 "data_offset": 0, 00:43:05.556 "data_size": 65536 00:43:05.556 }, 00:43:05.556 { 00:43:05.556 "name": "BaseBdev3", 00:43:05.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:05.556 "is_configured": false, 00:43:05.556 "data_offset": 0, 00:43:05.556 "data_size": 0 00:43:05.556 } 00:43:05.556 ] 00:43:05.556 }' 00:43:05.556 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:05.556 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:06.121 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:43:06.121 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:06.121 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:06.121 [2024-12-09 05:33:52.880821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:43:06.121 [2024-12-09 05:33:52.880947] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:43:06.121 [2024-12-09 05:33:52.880976] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:43:06.121 [2024-12-09 05:33:52.881379] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:43:06.121 [2024-12-09 05:33:52.886773] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:43:06.121 [2024-12-09 05:33:52.886811] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:43:06.121 [2024-12-09 05:33:52.887189] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:06.121 BaseBdev3 00:43:06.121 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:06.121 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:43:06.121 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:43:06.121 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:43:06.121 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:43:06.121 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:43:06.121 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:43:06.121 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:43:06.121 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:06.121 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:06.121 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:06.121 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:43:06.121 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:06.121 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:06.121 [ 00:43:06.121 { 00:43:06.121 "name": "BaseBdev3", 00:43:06.121 "aliases": [ 00:43:06.121 "3efcfd3c-0b27-4480-978a-f9ec15fd86bc" 00:43:06.121 ], 00:43:06.121 "product_name": "Malloc disk", 00:43:06.121 "block_size": 512, 00:43:06.121 "num_blocks": 65536, 00:43:06.121 "uuid": "3efcfd3c-0b27-4480-978a-f9ec15fd86bc", 00:43:06.121 "assigned_rate_limits": { 00:43:06.121 "rw_ios_per_sec": 0, 00:43:06.122 "rw_mbytes_per_sec": 0, 00:43:06.122 "r_mbytes_per_sec": 0, 00:43:06.122 "w_mbytes_per_sec": 0 00:43:06.122 }, 00:43:06.122 "claimed": true, 00:43:06.122 "claim_type": "exclusive_write", 00:43:06.122 "zoned": false, 00:43:06.122 "supported_io_types": { 00:43:06.122 "read": true, 00:43:06.122 "write": true, 00:43:06.122 "unmap": true, 00:43:06.122 "flush": true, 00:43:06.122 "reset": true, 00:43:06.122 "nvme_admin": false, 00:43:06.122 "nvme_io": false, 00:43:06.122 "nvme_io_md": false, 00:43:06.122 "write_zeroes": true, 00:43:06.122 "zcopy": true, 00:43:06.122 "get_zone_info": false, 00:43:06.122 "zone_management": false, 00:43:06.122 "zone_append": false, 00:43:06.122 "compare": false, 00:43:06.122 "compare_and_write": false, 00:43:06.122 "abort": true, 00:43:06.122 "seek_hole": false, 00:43:06.122 "seek_data": false, 00:43:06.122 "copy": true, 00:43:06.122 "nvme_iov_md": false 00:43:06.122 }, 00:43:06.122 "memory_domains": [ 00:43:06.122 { 00:43:06.122 "dma_device_id": "system", 00:43:06.122 "dma_device_type": 1 00:43:06.122 }, 00:43:06.122 { 00:43:06.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:43:06.122 "dma_device_type": 2 00:43:06.122 } 00:43:06.122 ], 00:43:06.122 "driver_specific": {} 00:43:06.122 } 00:43:06.122 ] 00:43:06.122 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:43:06.122 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:43:06.122 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:43:06.122 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:43:06.122 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:43:06.122 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:43:06.122 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:06.122 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:06.122 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:06.122 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:43:06.122 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:06.122 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:06.122 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:06.122 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:06.122 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:06.122 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:06.122 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:06.122 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:06.122 05:33:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:06.122 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:06.122 "name": "Existed_Raid", 00:43:06.122 "uuid": "af930a22-7070-4bbb-bb6b-b44b23edbe61", 00:43:06.122 "strip_size_kb": 64, 00:43:06.122 "state": "online", 00:43:06.122 "raid_level": "raid5f", 00:43:06.122 "superblock": false, 00:43:06.122 "num_base_bdevs": 3, 00:43:06.122 "num_base_bdevs_discovered": 3, 00:43:06.122 "num_base_bdevs_operational": 3, 00:43:06.122 "base_bdevs_list": [ 00:43:06.122 { 00:43:06.122 "name": "BaseBdev1", 00:43:06.122 "uuid": "10ee5f13-7f85-4ec9-999d-2eeef48d4229", 00:43:06.122 "is_configured": true, 00:43:06.122 "data_offset": 0, 00:43:06.122 "data_size": 65536 00:43:06.122 }, 00:43:06.122 { 00:43:06.122 "name": "BaseBdev2", 00:43:06.122 "uuid": "407b7c82-aafa-46f4-b58d-a39cc4c226be", 00:43:06.122 "is_configured": true, 00:43:06.122 "data_offset": 0, 00:43:06.122 "data_size": 65536 00:43:06.122 }, 00:43:06.122 { 00:43:06.122 "name": "BaseBdev3", 00:43:06.122 "uuid": "3efcfd3c-0b27-4480-978a-f9ec15fd86bc", 00:43:06.122 "is_configured": true, 00:43:06.122 "data_offset": 0, 00:43:06.122 "data_size": 65536 00:43:06.122 } 00:43:06.122 ] 00:43:06.122 }' 00:43:06.122 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:06.122 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:06.687 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:43:06.687 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:43:06.687 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:43:06.687 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:43:06.687 05:33:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:43:06.687 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:43:06.687 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:43:06.687 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:43:06.687 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:06.687 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:06.687 [2024-12-09 05:33:53.457375] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:43:06.688 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:06.688 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:43:06.688 "name": "Existed_Raid", 00:43:06.688 "aliases": [ 00:43:06.688 "af930a22-7070-4bbb-bb6b-b44b23edbe61" 00:43:06.688 ], 00:43:06.688 "product_name": "Raid Volume", 00:43:06.688 "block_size": 512, 00:43:06.688 "num_blocks": 131072, 00:43:06.688 "uuid": "af930a22-7070-4bbb-bb6b-b44b23edbe61", 00:43:06.688 "assigned_rate_limits": { 00:43:06.688 "rw_ios_per_sec": 0, 00:43:06.688 "rw_mbytes_per_sec": 0, 00:43:06.688 "r_mbytes_per_sec": 0, 00:43:06.688 "w_mbytes_per_sec": 0 00:43:06.688 }, 00:43:06.688 "claimed": false, 00:43:06.688 "zoned": false, 00:43:06.688 "supported_io_types": { 00:43:06.688 "read": true, 00:43:06.688 "write": true, 00:43:06.688 "unmap": false, 00:43:06.688 "flush": false, 00:43:06.688 "reset": true, 00:43:06.688 "nvme_admin": false, 00:43:06.688 "nvme_io": false, 00:43:06.688 "nvme_io_md": false, 00:43:06.688 "write_zeroes": true, 00:43:06.688 "zcopy": false, 00:43:06.688 "get_zone_info": false, 00:43:06.688 "zone_management": false, 00:43:06.688 "zone_append": false, 
00:43:06.688 "compare": false, 00:43:06.688 "compare_and_write": false, 00:43:06.688 "abort": false, 00:43:06.688 "seek_hole": false, 00:43:06.688 "seek_data": false, 00:43:06.688 "copy": false, 00:43:06.688 "nvme_iov_md": false 00:43:06.688 }, 00:43:06.688 "driver_specific": { 00:43:06.688 "raid": { 00:43:06.688 "uuid": "af930a22-7070-4bbb-bb6b-b44b23edbe61", 00:43:06.688 "strip_size_kb": 64, 00:43:06.688 "state": "online", 00:43:06.688 "raid_level": "raid5f", 00:43:06.688 "superblock": false, 00:43:06.688 "num_base_bdevs": 3, 00:43:06.688 "num_base_bdevs_discovered": 3, 00:43:06.688 "num_base_bdevs_operational": 3, 00:43:06.688 "base_bdevs_list": [ 00:43:06.688 { 00:43:06.688 "name": "BaseBdev1", 00:43:06.688 "uuid": "10ee5f13-7f85-4ec9-999d-2eeef48d4229", 00:43:06.688 "is_configured": true, 00:43:06.688 "data_offset": 0, 00:43:06.688 "data_size": 65536 00:43:06.688 }, 00:43:06.688 { 00:43:06.688 "name": "BaseBdev2", 00:43:06.688 "uuid": "407b7c82-aafa-46f4-b58d-a39cc4c226be", 00:43:06.688 "is_configured": true, 00:43:06.688 "data_offset": 0, 00:43:06.688 "data_size": 65536 00:43:06.688 }, 00:43:06.688 { 00:43:06.688 "name": "BaseBdev3", 00:43:06.688 "uuid": "3efcfd3c-0b27-4480-978a-f9ec15fd86bc", 00:43:06.688 "is_configured": true, 00:43:06.688 "data_offset": 0, 00:43:06.688 "data_size": 65536 00:43:06.688 } 00:43:06.688 ] 00:43:06.688 } 00:43:06.688 } 00:43:06.688 }' 00:43:06.688 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:43:06.688 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:43:06.688 BaseBdev2 00:43:06.688 BaseBdev3' 00:43:06.688 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:06.688 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:43:06.688 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:43:06.688 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:43:06.688 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:06.688 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:06.688 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:06.688 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:06.946 [2024-12-09 05:33:53.793219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:43:06.946 
05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:06.946 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.204 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:07.204 "name": "Existed_Raid", 00:43:07.204 "uuid": "af930a22-7070-4bbb-bb6b-b44b23edbe61", 00:43:07.204 "strip_size_kb": 64, 00:43:07.204 "state": 
"online", 00:43:07.204 "raid_level": "raid5f", 00:43:07.204 "superblock": false, 00:43:07.204 "num_base_bdevs": 3, 00:43:07.204 "num_base_bdevs_discovered": 2, 00:43:07.204 "num_base_bdevs_operational": 2, 00:43:07.204 "base_bdevs_list": [ 00:43:07.204 { 00:43:07.204 "name": null, 00:43:07.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:07.204 "is_configured": false, 00:43:07.204 "data_offset": 0, 00:43:07.204 "data_size": 65536 00:43:07.204 }, 00:43:07.204 { 00:43:07.204 "name": "BaseBdev2", 00:43:07.204 "uuid": "407b7c82-aafa-46f4-b58d-a39cc4c226be", 00:43:07.204 "is_configured": true, 00:43:07.204 "data_offset": 0, 00:43:07.204 "data_size": 65536 00:43:07.204 }, 00:43:07.204 { 00:43:07.204 "name": "BaseBdev3", 00:43:07.204 "uuid": "3efcfd3c-0b27-4480-978a-f9ec15fd86bc", 00:43:07.204 "is_configured": true, 00:43:07.204 "data_offset": 0, 00:43:07.204 "data_size": 65536 00:43:07.204 } 00:43:07.204 ] 00:43:07.204 }' 00:43:07.204 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:07.204 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:07.462 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:43:07.462 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:43:07.462 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:07.462 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.462 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:07.462 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:43:07.462 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.720 05:33:54 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:43:07.720 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:43:07.720 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:43:07.720 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.720 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:07.720 [2024-12-09 05:33:54.467344] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:43:07.720 [2024-12-09 05:33:54.467492] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:43:07.720 [2024-12-09 05:33:54.543814] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:07.720 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.720 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:43:07.720 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:43:07.720 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:07.720 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:43:07.720 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.720 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:07.720 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.720 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:43:07.720 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:43:07.720 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:43:07.720 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.720 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:07.720 [2024-12-09 05:33:54.607928] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:43:07.720 [2024-12-09 05:33:54.607984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:43:07.720 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.720 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:43:07.720 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:07.979 BaseBdev2 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:43:07.979 [ 00:43:07.979 { 00:43:07.979 "name": "BaseBdev2", 00:43:07.979 "aliases": [ 00:43:07.979 "105bf789-d5e3-4d36-aac9-935c188aaa11" 00:43:07.979 ], 00:43:07.979 "product_name": "Malloc disk", 00:43:07.979 "block_size": 512, 00:43:07.979 "num_blocks": 65536, 00:43:07.979 "uuid": "105bf789-d5e3-4d36-aac9-935c188aaa11", 00:43:07.979 "assigned_rate_limits": { 00:43:07.979 "rw_ios_per_sec": 0, 00:43:07.979 "rw_mbytes_per_sec": 0, 00:43:07.979 "r_mbytes_per_sec": 0, 00:43:07.979 "w_mbytes_per_sec": 0 00:43:07.979 }, 00:43:07.979 "claimed": false, 00:43:07.979 "zoned": false, 00:43:07.979 "supported_io_types": { 00:43:07.979 "read": true, 00:43:07.979 "write": true, 00:43:07.979 "unmap": true, 00:43:07.979 "flush": true, 00:43:07.979 "reset": true, 00:43:07.979 "nvme_admin": false, 00:43:07.979 "nvme_io": false, 00:43:07.979 "nvme_io_md": false, 00:43:07.979 "write_zeroes": true, 00:43:07.979 "zcopy": true, 00:43:07.979 "get_zone_info": false, 00:43:07.979 "zone_management": false, 00:43:07.979 "zone_append": false, 00:43:07.979 "compare": false, 00:43:07.979 "compare_and_write": false, 00:43:07.979 "abort": true, 00:43:07.979 "seek_hole": false, 00:43:07.979 "seek_data": false, 00:43:07.979 "copy": true, 00:43:07.979 "nvme_iov_md": false 00:43:07.979 }, 00:43:07.979 "memory_domains": [ 00:43:07.979 { 00:43:07.979 "dma_device_id": "system", 00:43:07.979 "dma_device_type": 1 00:43:07.979 }, 00:43:07.979 { 00:43:07.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:43:07.979 "dma_device_type": 2 00:43:07.979 } 00:43:07.979 ], 00:43:07.979 "driver_specific": {} 00:43:07.979 } 00:43:07.979 ] 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:07.979 BaseBdev3 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.979 05:33:54 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:43:07.979 [ 00:43:07.979 { 00:43:07.979 "name": "BaseBdev3", 00:43:07.979 "aliases": [ 00:43:07.979 "5a1ac7fd-e981-4646-9f2f-ad565ccc041e" 00:43:07.979 ], 00:43:07.979 "product_name": "Malloc disk", 00:43:07.979 "block_size": 512, 00:43:07.980 "num_blocks": 65536, 00:43:07.980 "uuid": "5a1ac7fd-e981-4646-9f2f-ad565ccc041e", 00:43:07.980 "assigned_rate_limits": { 00:43:07.980 "rw_ios_per_sec": 0, 00:43:07.980 "rw_mbytes_per_sec": 0, 00:43:07.980 "r_mbytes_per_sec": 0, 00:43:07.980 "w_mbytes_per_sec": 0 00:43:07.980 }, 00:43:07.980 "claimed": false, 00:43:07.980 "zoned": false, 00:43:07.980 "supported_io_types": { 00:43:07.980 "read": true, 00:43:07.980 "write": true, 00:43:07.980 "unmap": true, 00:43:07.980 "flush": true, 00:43:07.980 "reset": true, 00:43:07.980 "nvme_admin": false, 00:43:07.980 "nvme_io": false, 00:43:07.980 "nvme_io_md": false, 00:43:07.980 "write_zeroes": true, 00:43:07.980 "zcopy": true, 00:43:07.980 "get_zone_info": false, 00:43:07.980 "zone_management": false, 00:43:07.980 "zone_append": false, 00:43:07.980 "compare": false, 00:43:07.980 "compare_and_write": false, 00:43:07.980 "abort": true, 00:43:07.980 "seek_hole": false, 00:43:07.980 "seek_data": false, 00:43:07.980 "copy": true, 00:43:07.980 "nvme_iov_md": false 00:43:07.980 }, 00:43:07.980 "memory_domains": [ 00:43:07.980 { 00:43:07.980 "dma_device_id": "system", 00:43:07.980 "dma_device_type": 1 00:43:07.980 }, 00:43:07.980 { 00:43:07.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:43:07.980 "dma_device_type": 2 00:43:07.980 } 00:43:07.980 ], 00:43:07.980 "driver_specific": {} 00:43:07.980 } 00:43:07.980 ] 00:43:07.980 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.980 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:43:07.980 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:43:07.980 05:33:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:43:07.980 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:43:07.980 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.980 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:07.980 [2024-12-09 05:33:54.895222] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:43:07.980 [2024-12-09 05:33:54.895284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:43:07.980 [2024-12-09 05:33:54.895316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:43:07.980 [2024-12-09 05:33:54.897867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:43:07.980 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.980 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:43:07.980 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:43:07.980 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:43:07.980 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:07.980 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:07.980 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:43:07.980 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:07.980 05:33:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:07.980 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:07.980 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:07.980 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:07.980 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:07.980 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.980 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:07.980 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:08.238 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:08.238 "name": "Existed_Raid", 00:43:08.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:08.238 "strip_size_kb": 64, 00:43:08.238 "state": "configuring", 00:43:08.238 "raid_level": "raid5f", 00:43:08.238 "superblock": false, 00:43:08.238 "num_base_bdevs": 3, 00:43:08.238 "num_base_bdevs_discovered": 2, 00:43:08.238 "num_base_bdevs_operational": 3, 00:43:08.238 "base_bdevs_list": [ 00:43:08.238 { 00:43:08.238 "name": "BaseBdev1", 00:43:08.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:08.238 "is_configured": false, 00:43:08.239 "data_offset": 0, 00:43:08.239 "data_size": 0 00:43:08.239 }, 00:43:08.239 { 00:43:08.239 "name": "BaseBdev2", 00:43:08.239 "uuid": "105bf789-d5e3-4d36-aac9-935c188aaa11", 00:43:08.239 "is_configured": true, 00:43:08.239 "data_offset": 0, 00:43:08.239 "data_size": 65536 00:43:08.239 }, 00:43:08.239 { 00:43:08.239 "name": "BaseBdev3", 00:43:08.239 "uuid": "5a1ac7fd-e981-4646-9f2f-ad565ccc041e", 00:43:08.239 "is_configured": true, 
00:43:08.239 "data_offset": 0, 00:43:08.239 "data_size": 65536 00:43:08.239 } 00:43:08.239 ] 00:43:08.239 }' 00:43:08.239 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:08.239 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:08.497 05:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:43:08.497 05:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:08.497 05:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:08.497 [2024-12-09 05:33:55.432237] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:43:08.497 05:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:08.497 05:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:43:08.497 05:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:43:08.497 05:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:43:08.497 05:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:08.497 05:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:08.497 05:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:43:08.497 05:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:08.497 05:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:08.497 05:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:08.497 05:33:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:08.497 05:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:08.497 05:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:08.497 05:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:08.497 05:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:08.497 05:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:08.754 05:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:08.754 "name": "Existed_Raid", 00:43:08.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:08.754 "strip_size_kb": 64, 00:43:08.754 "state": "configuring", 00:43:08.754 "raid_level": "raid5f", 00:43:08.754 "superblock": false, 00:43:08.754 "num_base_bdevs": 3, 00:43:08.754 "num_base_bdevs_discovered": 1, 00:43:08.754 "num_base_bdevs_operational": 3, 00:43:08.754 "base_bdevs_list": [ 00:43:08.754 { 00:43:08.754 "name": "BaseBdev1", 00:43:08.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:08.754 "is_configured": false, 00:43:08.754 "data_offset": 0, 00:43:08.754 "data_size": 0 00:43:08.754 }, 00:43:08.754 { 00:43:08.754 "name": null, 00:43:08.754 "uuid": "105bf789-d5e3-4d36-aac9-935c188aaa11", 00:43:08.754 "is_configured": false, 00:43:08.754 "data_offset": 0, 00:43:08.754 "data_size": 65536 00:43:08.754 }, 00:43:08.754 { 00:43:08.754 "name": "BaseBdev3", 00:43:08.754 "uuid": "5a1ac7fd-e981-4646-9f2f-ad565ccc041e", 00:43:08.754 "is_configured": true, 00:43:08.754 "data_offset": 0, 00:43:08.754 "data_size": 65536 00:43:08.754 } 00:43:08.754 ] 00:43:08.754 }' 00:43:08.754 05:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:08.754 05:33:55 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:09.011 05:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:09.011 05:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.011 05:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:43:09.011 05:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:09.011 05:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.268 05:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:43:09.268 05:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:43:09.268 05:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.268 05:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:09.268 [2024-12-09 05:33:56.053268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:43:09.268 BaseBdev1 00:43:09.268 05:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.268 05:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:43:09.268 05:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:43:09.268 05:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:43:09.268 05:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:43:09.268 05:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:43:09.268 05:33:56 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:43:09.268 05:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:43:09.268 05:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.268 05:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:09.268 05:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.268 05:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:43:09.268 05:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.268 05:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:09.268 [ 00:43:09.268 { 00:43:09.268 "name": "BaseBdev1", 00:43:09.268 "aliases": [ 00:43:09.268 "b4c8d678-473f-4400-91c2-8356c1f37648" 00:43:09.268 ], 00:43:09.268 "product_name": "Malloc disk", 00:43:09.268 "block_size": 512, 00:43:09.268 "num_blocks": 65536, 00:43:09.268 "uuid": "b4c8d678-473f-4400-91c2-8356c1f37648", 00:43:09.268 "assigned_rate_limits": { 00:43:09.268 "rw_ios_per_sec": 0, 00:43:09.268 "rw_mbytes_per_sec": 0, 00:43:09.268 "r_mbytes_per_sec": 0, 00:43:09.268 "w_mbytes_per_sec": 0 00:43:09.268 }, 00:43:09.268 "claimed": true, 00:43:09.268 "claim_type": "exclusive_write", 00:43:09.268 "zoned": false, 00:43:09.268 "supported_io_types": { 00:43:09.268 "read": true, 00:43:09.268 "write": true, 00:43:09.268 "unmap": true, 00:43:09.268 "flush": true, 00:43:09.268 "reset": true, 00:43:09.269 "nvme_admin": false, 00:43:09.269 "nvme_io": false, 00:43:09.269 "nvme_io_md": false, 00:43:09.269 "write_zeroes": true, 00:43:09.269 "zcopy": true, 00:43:09.269 "get_zone_info": false, 00:43:09.269 "zone_management": false, 00:43:09.269 "zone_append": false, 00:43:09.269 
"compare": false, 00:43:09.269 "compare_and_write": false, 00:43:09.269 "abort": true, 00:43:09.269 "seek_hole": false, 00:43:09.269 "seek_data": false, 00:43:09.269 "copy": true, 00:43:09.269 "nvme_iov_md": false 00:43:09.269 }, 00:43:09.269 "memory_domains": [ 00:43:09.269 { 00:43:09.269 "dma_device_id": "system", 00:43:09.269 "dma_device_type": 1 00:43:09.269 }, 00:43:09.269 { 00:43:09.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:43:09.269 "dma_device_type": 2 00:43:09.269 } 00:43:09.269 ], 00:43:09.269 "driver_specific": {} 00:43:09.269 } 00:43:09.269 ] 00:43:09.269 05:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.269 05:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:43:09.269 05:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:43:09.269 05:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:43:09.269 05:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:43:09.269 05:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:09.269 05:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:09.269 05:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:43:09.269 05:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:09.269 05:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:09.269 05:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:09.269 05:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:09.269 05:33:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:09.269 05:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:09.269 05:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.269 05:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:09.269 05:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.269 05:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:09.269 "name": "Existed_Raid", 00:43:09.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:09.269 "strip_size_kb": 64, 00:43:09.269 "state": "configuring", 00:43:09.269 "raid_level": "raid5f", 00:43:09.269 "superblock": false, 00:43:09.269 "num_base_bdevs": 3, 00:43:09.269 "num_base_bdevs_discovered": 2, 00:43:09.269 "num_base_bdevs_operational": 3, 00:43:09.269 "base_bdevs_list": [ 00:43:09.269 { 00:43:09.269 "name": "BaseBdev1", 00:43:09.269 "uuid": "b4c8d678-473f-4400-91c2-8356c1f37648", 00:43:09.269 "is_configured": true, 00:43:09.269 "data_offset": 0, 00:43:09.269 "data_size": 65536 00:43:09.269 }, 00:43:09.269 { 00:43:09.269 "name": null, 00:43:09.269 "uuid": "105bf789-d5e3-4d36-aac9-935c188aaa11", 00:43:09.269 "is_configured": false, 00:43:09.269 "data_offset": 0, 00:43:09.269 "data_size": 65536 00:43:09.269 }, 00:43:09.269 { 00:43:09.269 "name": "BaseBdev3", 00:43:09.269 "uuid": "5a1ac7fd-e981-4646-9f2f-ad565ccc041e", 00:43:09.269 "is_configured": true, 00:43:09.269 "data_offset": 0, 00:43:09.269 "data_size": 65536 00:43:09.269 } 00:43:09.269 ] 00:43:09.269 }' 00:43:09.269 05:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:09.269 05:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:09.833 05:33:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:09.833 05:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:43:09.833 05:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.833 05:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:09.833 05:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.833 05:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:43:09.833 05:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:43:09.833 05:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.833 05:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:09.833 [2024-12-09 05:33:56.665478] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:43:09.833 05:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.833 05:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:43:09.833 05:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:43:09.833 05:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:43:09.833 05:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:09.833 05:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:09.833 05:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:43:09.833 05:33:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:09.833 05:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:09.833 05:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:09.833 05:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:09.833 05:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:09.833 05:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:09.833 05:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.833 05:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:09.833 05:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.833 05:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:09.833 "name": "Existed_Raid", 00:43:09.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:09.833 "strip_size_kb": 64, 00:43:09.833 "state": "configuring", 00:43:09.833 "raid_level": "raid5f", 00:43:09.833 "superblock": false, 00:43:09.833 "num_base_bdevs": 3, 00:43:09.833 "num_base_bdevs_discovered": 1, 00:43:09.833 "num_base_bdevs_operational": 3, 00:43:09.833 "base_bdevs_list": [ 00:43:09.833 { 00:43:09.833 "name": "BaseBdev1", 00:43:09.833 "uuid": "b4c8d678-473f-4400-91c2-8356c1f37648", 00:43:09.833 "is_configured": true, 00:43:09.833 "data_offset": 0, 00:43:09.833 "data_size": 65536 00:43:09.833 }, 00:43:09.833 { 00:43:09.833 "name": null, 00:43:09.833 "uuid": "105bf789-d5e3-4d36-aac9-935c188aaa11", 00:43:09.833 "is_configured": false, 00:43:09.833 "data_offset": 0, 00:43:09.833 "data_size": 65536 00:43:09.833 }, 00:43:09.833 { 00:43:09.833 "name": null, 
00:43:09.833 "uuid": "5a1ac7fd-e981-4646-9f2f-ad565ccc041e", 00:43:09.833 "is_configured": false, 00:43:09.833 "data_offset": 0, 00:43:09.833 "data_size": 65536 00:43:09.833 } 00:43:09.833 ] 00:43:09.833 }' 00:43:09.833 05:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:09.833 05:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:10.399 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:10.399 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:43:10.399 05:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:10.399 05:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:10.399 05:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:10.399 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:43:10.399 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:43:10.399 05:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:10.399 05:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:10.399 [2024-12-09 05:33:57.273745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:43:10.399 05:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:10.399 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:43:10.399 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:43:10.399 05:33:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:43:10.399 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:10.399 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:10.399 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:43:10.399 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:10.399 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:10.399 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:10.399 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:10.399 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:10.399 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:10.399 05:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:10.399 05:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:10.399 05:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:10.399 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:10.399 "name": "Existed_Raid", 00:43:10.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:10.399 "strip_size_kb": 64, 00:43:10.399 "state": "configuring", 00:43:10.399 "raid_level": "raid5f", 00:43:10.399 "superblock": false, 00:43:10.399 "num_base_bdevs": 3, 00:43:10.399 "num_base_bdevs_discovered": 2, 00:43:10.399 "num_base_bdevs_operational": 3, 00:43:10.399 "base_bdevs_list": [ 00:43:10.399 { 
00:43:10.399 "name": "BaseBdev1", 00:43:10.399 "uuid": "b4c8d678-473f-4400-91c2-8356c1f37648", 00:43:10.399 "is_configured": true, 00:43:10.399 "data_offset": 0, 00:43:10.399 "data_size": 65536 00:43:10.399 }, 00:43:10.399 { 00:43:10.399 "name": null, 00:43:10.399 "uuid": "105bf789-d5e3-4d36-aac9-935c188aaa11", 00:43:10.399 "is_configured": false, 00:43:10.399 "data_offset": 0, 00:43:10.399 "data_size": 65536 00:43:10.399 }, 00:43:10.399 { 00:43:10.399 "name": "BaseBdev3", 00:43:10.399 "uuid": "5a1ac7fd-e981-4646-9f2f-ad565ccc041e", 00:43:10.399 "is_configured": true, 00:43:10.399 "data_offset": 0, 00:43:10.399 "data_size": 65536 00:43:10.399 } 00:43:10.399 ] 00:43:10.399 }' 00:43:10.399 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:10.399 05:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:10.965 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:10.965 05:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:10.965 05:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:10.965 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:43:10.965 05:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:10.965 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:43:10.965 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:43:10.965 05:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:10.965 05:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:10.965 [2024-12-09 05:33:57.861908] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:43:11.224 05:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:11.224 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:43:11.224 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:43:11.224 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:43:11.224 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:11.224 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:11.224 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:43:11.224 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:11.224 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:11.224 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:11.224 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:11.224 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:11.224 05:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:11.224 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:11.224 05:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:11.224 05:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:11.224 05:33:57 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:11.224 "name": "Existed_Raid", 00:43:11.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:11.224 "strip_size_kb": 64, 00:43:11.224 "state": "configuring", 00:43:11.224 "raid_level": "raid5f", 00:43:11.224 "superblock": false, 00:43:11.224 "num_base_bdevs": 3, 00:43:11.224 "num_base_bdevs_discovered": 1, 00:43:11.224 "num_base_bdevs_operational": 3, 00:43:11.224 "base_bdevs_list": [ 00:43:11.224 { 00:43:11.224 "name": null, 00:43:11.224 "uuid": "b4c8d678-473f-4400-91c2-8356c1f37648", 00:43:11.224 "is_configured": false, 00:43:11.224 "data_offset": 0, 00:43:11.224 "data_size": 65536 00:43:11.224 }, 00:43:11.224 { 00:43:11.224 "name": null, 00:43:11.224 "uuid": "105bf789-d5e3-4d36-aac9-935c188aaa11", 00:43:11.224 "is_configured": false, 00:43:11.224 "data_offset": 0, 00:43:11.224 "data_size": 65536 00:43:11.224 }, 00:43:11.224 { 00:43:11.224 "name": "BaseBdev3", 00:43:11.224 "uuid": "5a1ac7fd-e981-4646-9f2f-ad565ccc041e", 00:43:11.224 "is_configured": true, 00:43:11.224 "data_offset": 0, 00:43:11.224 "data_size": 65536 00:43:11.224 } 00:43:11.224 ] 00:43:11.224 }' 00:43:11.224 05:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:11.224 05:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:11.791 05:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:11.791 05:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:11.791 05:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:43:11.791 05:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:11.791 05:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:11.791 05:33:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:43:11.791 05:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:43:11.791 05:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:11.791 05:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:11.791 [2024-12-09 05:33:58.524949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:43:11.791 05:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:11.791 05:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:43:11.791 05:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:43:11.791 05:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:43:11.791 05:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:11.791 05:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:11.791 05:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:43:11.791 05:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:11.791 05:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:11.791 05:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:11.791 05:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:11.791 05:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:11.791 05:33:58 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:11.791 05:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:11.791 05:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:11.791 05:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:11.791 05:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:11.791 "name": "Existed_Raid", 00:43:11.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:11.791 "strip_size_kb": 64, 00:43:11.791 "state": "configuring", 00:43:11.791 "raid_level": "raid5f", 00:43:11.791 "superblock": false, 00:43:11.791 "num_base_bdevs": 3, 00:43:11.791 "num_base_bdevs_discovered": 2, 00:43:11.791 "num_base_bdevs_operational": 3, 00:43:11.791 "base_bdevs_list": [ 00:43:11.791 { 00:43:11.791 "name": null, 00:43:11.791 "uuid": "b4c8d678-473f-4400-91c2-8356c1f37648", 00:43:11.791 "is_configured": false, 00:43:11.791 "data_offset": 0, 00:43:11.791 "data_size": 65536 00:43:11.791 }, 00:43:11.791 { 00:43:11.791 "name": "BaseBdev2", 00:43:11.791 "uuid": "105bf789-d5e3-4d36-aac9-935c188aaa11", 00:43:11.791 "is_configured": true, 00:43:11.791 "data_offset": 0, 00:43:11.791 "data_size": 65536 00:43:11.791 }, 00:43:11.791 { 00:43:11.791 "name": "BaseBdev3", 00:43:11.791 "uuid": "5a1ac7fd-e981-4646-9f2f-ad565ccc041e", 00:43:11.791 "is_configured": true, 00:43:11.791 "data_offset": 0, 00:43:11.791 "data_size": 65536 00:43:11.791 } 00:43:11.791 ] 00:43:11.791 }' 00:43:11.791 05:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:11.792 05:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:12.359 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:12.359 05:33:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.359 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:12.359 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:43:12.359 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.359 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:43:12.359 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:12.359 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.359 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:12.359 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:43:12.359 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.359 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b4c8d678-473f-4400-91c2-8356c1f37648 00:43:12.359 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.359 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:12.359 [2024-12-09 05:33:59.190472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:43:12.359 [2024-12-09 05:33:59.190556] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:43:12.359 [2024-12-09 05:33:59.190573] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:43:12.359 [2024-12-09 05:33:59.190944] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:43:12.359 [2024-12-09 05:33:59.195688] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:43:12.359 [2024-12-09 05:33:59.195715] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:43:12.359 [2024-12-09 05:33:59.196121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:12.359 NewBaseBdev 00:43:12.359 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.359 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:43:12.359 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:43:12.359 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:43:12.359 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:43:12.359 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:43:12.359 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:43:12.359 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:43:12.359 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.359 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:12.359 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.359 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:43:12.359 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.359 05:33:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:12.359 [ 00:43:12.359 { 00:43:12.359 "name": "NewBaseBdev", 00:43:12.359 "aliases": [ 00:43:12.359 "b4c8d678-473f-4400-91c2-8356c1f37648" 00:43:12.359 ], 00:43:12.359 "product_name": "Malloc disk", 00:43:12.359 "block_size": 512, 00:43:12.359 "num_blocks": 65536, 00:43:12.359 "uuid": "b4c8d678-473f-4400-91c2-8356c1f37648", 00:43:12.359 "assigned_rate_limits": { 00:43:12.360 "rw_ios_per_sec": 0, 00:43:12.360 "rw_mbytes_per_sec": 0, 00:43:12.360 "r_mbytes_per_sec": 0, 00:43:12.360 "w_mbytes_per_sec": 0 00:43:12.360 }, 00:43:12.360 "claimed": true, 00:43:12.360 "claim_type": "exclusive_write", 00:43:12.360 "zoned": false, 00:43:12.360 "supported_io_types": { 00:43:12.360 "read": true, 00:43:12.360 "write": true, 00:43:12.360 "unmap": true, 00:43:12.360 "flush": true, 00:43:12.360 "reset": true, 00:43:12.360 "nvme_admin": false, 00:43:12.360 "nvme_io": false, 00:43:12.360 "nvme_io_md": false, 00:43:12.360 "write_zeroes": true, 00:43:12.360 "zcopy": true, 00:43:12.360 "get_zone_info": false, 00:43:12.360 "zone_management": false, 00:43:12.360 "zone_append": false, 00:43:12.360 "compare": false, 00:43:12.360 "compare_and_write": false, 00:43:12.360 "abort": true, 00:43:12.360 "seek_hole": false, 00:43:12.360 "seek_data": false, 00:43:12.360 "copy": true, 00:43:12.360 "nvme_iov_md": false 00:43:12.360 }, 00:43:12.360 "memory_domains": [ 00:43:12.360 { 00:43:12.360 "dma_device_id": "system", 00:43:12.360 "dma_device_type": 1 00:43:12.360 }, 00:43:12.360 { 00:43:12.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:43:12.360 "dma_device_type": 2 00:43:12.360 } 00:43:12.360 ], 00:43:12.360 "driver_specific": {} 00:43:12.360 } 00:43:12.360 ] 00:43:12.360 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.360 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:43:12.360 05:33:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:43:12.360 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:43:12.360 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:12.360 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:12.360 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:12.360 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:43:12.360 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:12.360 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:12.360 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:12.360 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:12.360 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:12.360 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:12.360 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.360 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:12.360 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.360 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:12.360 "name": "Existed_Raid", 00:43:12.360 "uuid": "2f57dc61-1f94-4913-865a-acc95e6ecef0", 00:43:12.360 "strip_size_kb": 64, 00:43:12.360 "state": "online", 
00:43:12.360 "raid_level": "raid5f", 00:43:12.360 "superblock": false, 00:43:12.360 "num_base_bdevs": 3, 00:43:12.360 "num_base_bdevs_discovered": 3, 00:43:12.360 "num_base_bdevs_operational": 3, 00:43:12.360 "base_bdevs_list": [ 00:43:12.360 { 00:43:12.360 "name": "NewBaseBdev", 00:43:12.360 "uuid": "b4c8d678-473f-4400-91c2-8356c1f37648", 00:43:12.360 "is_configured": true, 00:43:12.360 "data_offset": 0, 00:43:12.360 "data_size": 65536 00:43:12.360 }, 00:43:12.360 { 00:43:12.360 "name": "BaseBdev2", 00:43:12.360 "uuid": "105bf789-d5e3-4d36-aac9-935c188aaa11", 00:43:12.360 "is_configured": true, 00:43:12.360 "data_offset": 0, 00:43:12.360 "data_size": 65536 00:43:12.360 }, 00:43:12.360 { 00:43:12.360 "name": "BaseBdev3", 00:43:12.360 "uuid": "5a1ac7fd-e981-4646-9f2f-ad565ccc041e", 00:43:12.360 "is_configured": true, 00:43:12.360 "data_offset": 0, 00:43:12.360 "data_size": 65536 00:43:12.360 } 00:43:12.360 ] 00:43:12.360 }' 00:43:12.360 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:12.360 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:12.949 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:43:12.949 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:43:12.949 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:43:12.950 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:43:12.950 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:43:12.950 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:43:12.950 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:43:12.950 05:33:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:43:12.950 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.950 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:12.950 [2024-12-09 05:33:59.762206] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:43:12.950 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.950 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:43:12.950 "name": "Existed_Raid", 00:43:12.950 "aliases": [ 00:43:12.950 "2f57dc61-1f94-4913-865a-acc95e6ecef0" 00:43:12.950 ], 00:43:12.950 "product_name": "Raid Volume", 00:43:12.950 "block_size": 512, 00:43:12.950 "num_blocks": 131072, 00:43:12.950 "uuid": "2f57dc61-1f94-4913-865a-acc95e6ecef0", 00:43:12.950 "assigned_rate_limits": { 00:43:12.950 "rw_ios_per_sec": 0, 00:43:12.950 "rw_mbytes_per_sec": 0, 00:43:12.950 "r_mbytes_per_sec": 0, 00:43:12.950 "w_mbytes_per_sec": 0 00:43:12.950 }, 00:43:12.950 "claimed": false, 00:43:12.950 "zoned": false, 00:43:12.950 "supported_io_types": { 00:43:12.950 "read": true, 00:43:12.950 "write": true, 00:43:12.950 "unmap": false, 00:43:12.950 "flush": false, 00:43:12.950 "reset": true, 00:43:12.950 "nvme_admin": false, 00:43:12.950 "nvme_io": false, 00:43:12.950 "nvme_io_md": false, 00:43:12.950 "write_zeroes": true, 00:43:12.950 "zcopy": false, 00:43:12.950 "get_zone_info": false, 00:43:12.950 "zone_management": false, 00:43:12.950 "zone_append": false, 00:43:12.950 "compare": false, 00:43:12.950 "compare_and_write": false, 00:43:12.950 "abort": false, 00:43:12.950 "seek_hole": false, 00:43:12.950 "seek_data": false, 00:43:12.950 "copy": false, 00:43:12.950 "nvme_iov_md": false 00:43:12.950 }, 00:43:12.950 "driver_specific": { 00:43:12.950 "raid": { 00:43:12.950 "uuid": 
"2f57dc61-1f94-4913-865a-acc95e6ecef0", 00:43:12.950 "strip_size_kb": 64, 00:43:12.950 "state": "online", 00:43:12.950 "raid_level": "raid5f", 00:43:12.950 "superblock": false, 00:43:12.950 "num_base_bdevs": 3, 00:43:12.950 "num_base_bdevs_discovered": 3, 00:43:12.950 "num_base_bdevs_operational": 3, 00:43:12.950 "base_bdevs_list": [ 00:43:12.950 { 00:43:12.950 "name": "NewBaseBdev", 00:43:12.950 "uuid": "b4c8d678-473f-4400-91c2-8356c1f37648", 00:43:12.950 "is_configured": true, 00:43:12.950 "data_offset": 0, 00:43:12.950 "data_size": 65536 00:43:12.950 }, 00:43:12.950 { 00:43:12.950 "name": "BaseBdev2", 00:43:12.950 "uuid": "105bf789-d5e3-4d36-aac9-935c188aaa11", 00:43:12.950 "is_configured": true, 00:43:12.950 "data_offset": 0, 00:43:12.950 "data_size": 65536 00:43:12.950 }, 00:43:12.950 { 00:43:12.950 "name": "BaseBdev3", 00:43:12.950 "uuid": "5a1ac7fd-e981-4646-9f2f-ad565ccc041e", 00:43:12.950 "is_configured": true, 00:43:12.950 "data_offset": 0, 00:43:12.950 "data_size": 65536 00:43:12.950 } 00:43:12.950 ] 00:43:12.950 } 00:43:12.950 } 00:43:12.950 }' 00:43:12.950 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:43:12.950 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:43:12.950 BaseBdev2 00:43:12.950 BaseBdev3' 00:43:12.950 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:12.950 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:43:12.950 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:43:12.950 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:43:12.950 05:33:59 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.950 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:12.950 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:13.208 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:13.208 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:43:13.208 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:43:13.208 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:43:13.208 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:43:13.208 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:13.208 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:13.208 05:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:13.208 05:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:13.208 05:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:43:13.208 05:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:43:13.208 05:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:43:13.208 05:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:43:13.208 05:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:13.208 05:34:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:13.208 05:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:13.208 05:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:13.208 05:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:43:13.208 05:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:43:13.208 05:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:43:13.208 05:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:13.208 05:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:13.208 [2024-12-09 05:34:00.073958] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:43:13.208 [2024-12-09 05:34:00.073995] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:43:13.208 [2024-12-09 05:34:00.074113] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:13.208 [2024-12-09 05:34:00.074497] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:13.208 [2024-12-09 05:34:00.074548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:43:13.208 05:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:13.208 05:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80335 00:43:13.208 05:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80335 ']' 00:43:13.208 05:34:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 80335 00:43:13.208 05:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:43:13.208 05:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:13.208 05:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80335 00:43:13.208 05:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:13.208 killing process with pid 80335 00:43:13.208 05:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:13.208 05:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80335' 00:43:13.208 05:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80335 00:43:13.208 [2024-12-09 05:34:00.113897] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:43:13.208 05:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80335 00:43:13.465 [2024-12-09 05:34:00.370726] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:43:14.867 00:43:14.867 real 0m12.058s 00:43:14.867 user 0m20.021s 00:43:14.867 sys 0m1.698s 00:43:14.867 ************************************ 00:43:14.867 END TEST raid5f_state_function_test 00:43:14.867 ************************************ 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:43:14.867 05:34:01 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:43:14.867 05:34:01 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:43:14.867 05:34:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:14.867 05:34:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:43:14.867 ************************************ 00:43:14.867 START TEST raid5f_state_function_test_sb 00:43:14.867 ************************************ 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:43:14.867 05:34:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80968 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80968' 00:43:14.867 Process raid pid: 80968 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80968 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80968 ']' 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:14.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:14.867 05:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:14.867 [2024-12-09 05:34:01.680227] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:43:14.867 [2024-12-09 05:34:01.680782] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:15.124 [2024-12-09 05:34:01.870267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:15.124 [2024-12-09 05:34:02.001007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:15.382 [2024-12-09 05:34:02.201668] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:43:15.382 [2024-12-09 05:34:02.201720] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:43:15.947 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:15.947 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:43:15.947 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:43:15.947 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.947 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:15.947 [2024-12-09 05:34:02.642389] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:43:15.947 [2024-12-09 05:34:02.642629] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:43:15.947 [2024-12-09 05:34:02.642755] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:43:15.947 [2024-12-09 05:34:02.642834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:43:15.947 [2024-12-09 05:34:02.642951] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:43:15.947 [2024-12-09 05:34:02.643010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:43:15.947 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.947 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:43:15.947 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:43:15.947 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:43:15.947 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:15.947 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:15.947 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:43:15.947 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:15.947 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:15.947 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:15.947 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:15.947 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:15.947 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:15.947 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.947 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:15.947 05:34:02 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.947 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:15.947 "name": "Existed_Raid", 00:43:15.947 "uuid": "9e4d83e7-a6a3-42e3-9c0e-57a85c6c0a2e", 00:43:15.947 "strip_size_kb": 64, 00:43:15.947 "state": "configuring", 00:43:15.947 "raid_level": "raid5f", 00:43:15.947 "superblock": true, 00:43:15.947 "num_base_bdevs": 3, 00:43:15.947 "num_base_bdevs_discovered": 0, 00:43:15.947 "num_base_bdevs_operational": 3, 00:43:15.947 "base_bdevs_list": [ 00:43:15.947 { 00:43:15.947 "name": "BaseBdev1", 00:43:15.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:15.947 "is_configured": false, 00:43:15.947 "data_offset": 0, 00:43:15.947 "data_size": 0 00:43:15.947 }, 00:43:15.947 { 00:43:15.947 "name": "BaseBdev2", 00:43:15.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:15.947 "is_configured": false, 00:43:15.947 "data_offset": 0, 00:43:15.947 "data_size": 0 00:43:15.947 }, 00:43:15.947 { 00:43:15.947 "name": "BaseBdev3", 00:43:15.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:15.947 "is_configured": false, 00:43:15.947 "data_offset": 0, 00:43:15.947 "data_size": 0 00:43:15.947 } 00:43:15.947 ] 00:43:15.947 }' 00:43:15.947 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:15.947 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:16.204 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:43:16.204 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.204 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:16.204 [2024-12-09 05:34:03.134432] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:43:16.204 
[2024-12-09 05:34:03.134480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:43:16.204 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.205 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:43:16.205 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.205 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:16.205 [2024-12-09 05:34:03.142452] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:43:16.205 [2024-12-09 05:34:03.142699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:43:16.205 [2024-12-09 05:34:03.142831] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:43:16.205 [2024-12-09 05:34:03.142866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:43:16.205 [2024-12-09 05:34:03.142879] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:43:16.205 [2024-12-09 05:34:03.142895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:43:16.205 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.205 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:43:16.205 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.205 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:16.463 [2024-12-09 05:34:03.190702] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:43:16.463 BaseBdev1 00:43:16.463 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.463 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:43:16.463 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:43:16.463 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:43:16.463 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:43:16.463 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:43:16.463 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:43:16.463 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:43:16.463 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.463 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:16.463 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.463 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:43:16.463 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.463 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:16.463 [ 00:43:16.463 { 00:43:16.463 "name": "BaseBdev1", 00:43:16.463 "aliases": [ 00:43:16.463 "b5100e4a-194f-4d60-908d-28845db70626" 00:43:16.463 ], 00:43:16.463 "product_name": "Malloc disk", 00:43:16.463 "block_size": 512, 00:43:16.463 
"num_blocks": 65536, 00:43:16.463 "uuid": "b5100e4a-194f-4d60-908d-28845db70626", 00:43:16.463 "assigned_rate_limits": { 00:43:16.463 "rw_ios_per_sec": 0, 00:43:16.463 "rw_mbytes_per_sec": 0, 00:43:16.463 "r_mbytes_per_sec": 0, 00:43:16.463 "w_mbytes_per_sec": 0 00:43:16.463 }, 00:43:16.463 "claimed": true, 00:43:16.463 "claim_type": "exclusive_write", 00:43:16.463 "zoned": false, 00:43:16.463 "supported_io_types": { 00:43:16.463 "read": true, 00:43:16.463 "write": true, 00:43:16.463 "unmap": true, 00:43:16.463 "flush": true, 00:43:16.463 "reset": true, 00:43:16.463 "nvme_admin": false, 00:43:16.463 "nvme_io": false, 00:43:16.463 "nvme_io_md": false, 00:43:16.463 "write_zeroes": true, 00:43:16.463 "zcopy": true, 00:43:16.463 "get_zone_info": false, 00:43:16.463 "zone_management": false, 00:43:16.463 "zone_append": false, 00:43:16.463 "compare": false, 00:43:16.463 "compare_and_write": false, 00:43:16.463 "abort": true, 00:43:16.463 "seek_hole": false, 00:43:16.463 "seek_data": false, 00:43:16.463 "copy": true, 00:43:16.463 "nvme_iov_md": false 00:43:16.463 }, 00:43:16.463 "memory_domains": [ 00:43:16.463 { 00:43:16.463 "dma_device_id": "system", 00:43:16.463 "dma_device_type": 1 00:43:16.463 }, 00:43:16.463 { 00:43:16.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:43:16.463 "dma_device_type": 2 00:43:16.463 } 00:43:16.463 ], 00:43:16.463 "driver_specific": {} 00:43:16.463 } 00:43:16.463 ] 00:43:16.463 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.463 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:43:16.463 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:43:16.463 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:43:16.463 05:34:03 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:43:16.463 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:16.463 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:16.463 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:43:16.463 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:16.463 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:16.463 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:16.463 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:16.463 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:16.463 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:16.463 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.463 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:16.463 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.463 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:16.463 "name": "Existed_Raid", 00:43:16.463 "uuid": "6355d6db-9359-4794-883e-8994902e9b14", 00:43:16.463 "strip_size_kb": 64, 00:43:16.463 "state": "configuring", 00:43:16.463 "raid_level": "raid5f", 00:43:16.463 "superblock": true, 00:43:16.463 "num_base_bdevs": 3, 00:43:16.464 "num_base_bdevs_discovered": 1, 00:43:16.464 "num_base_bdevs_operational": 3, 00:43:16.464 "base_bdevs_list": [ 00:43:16.464 { 00:43:16.464 
"name": "BaseBdev1", 00:43:16.464 "uuid": "b5100e4a-194f-4d60-908d-28845db70626", 00:43:16.464 "is_configured": true, 00:43:16.464 "data_offset": 2048, 00:43:16.464 "data_size": 63488 00:43:16.464 }, 00:43:16.464 { 00:43:16.464 "name": "BaseBdev2", 00:43:16.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:16.464 "is_configured": false, 00:43:16.464 "data_offset": 0, 00:43:16.464 "data_size": 0 00:43:16.464 }, 00:43:16.464 { 00:43:16.464 "name": "BaseBdev3", 00:43:16.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:16.464 "is_configured": false, 00:43:16.464 "data_offset": 0, 00:43:16.464 "data_size": 0 00:43:16.464 } 00:43:16.464 ] 00:43:16.464 }' 00:43:16.464 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:16.464 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:17.030 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:43:17.030 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:17.030 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:17.030 [2024-12-09 05:34:03.778959] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:43:17.030 [2024-12-09 05:34:03.779205] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:43:17.030 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:17.030 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:43:17.030 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:17.030 05:34:03 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:43:17.030 [2024-12-09 05:34:03.787003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:43:17.030 [2024-12-09 05:34:03.789711] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:43:17.030 [2024-12-09 05:34:03.789976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:43:17.030 [2024-12-09 05:34:03.790005] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:43:17.030 [2024-12-09 05:34:03.790024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:43:17.030 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:17.030 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:43:17.030 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:43:17.030 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:43:17.030 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:43:17.030 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:43:17.030 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:17.030 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:17.030 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:43:17.030 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:17.030 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:43:17.030 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:17.030 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:17.030 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:17.031 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:17.031 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:17.031 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:17.031 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:17.031 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:17.031 "name": "Existed_Raid", 00:43:17.031 "uuid": "2e0a87c6-6099-4bff-a26c-4fd03afc8f75", 00:43:17.031 "strip_size_kb": 64, 00:43:17.031 "state": "configuring", 00:43:17.031 "raid_level": "raid5f", 00:43:17.031 "superblock": true, 00:43:17.031 "num_base_bdevs": 3, 00:43:17.031 "num_base_bdevs_discovered": 1, 00:43:17.031 "num_base_bdevs_operational": 3, 00:43:17.031 "base_bdevs_list": [ 00:43:17.031 { 00:43:17.031 "name": "BaseBdev1", 00:43:17.031 "uuid": "b5100e4a-194f-4d60-908d-28845db70626", 00:43:17.031 "is_configured": true, 00:43:17.031 "data_offset": 2048, 00:43:17.031 "data_size": 63488 00:43:17.031 }, 00:43:17.031 { 00:43:17.031 "name": "BaseBdev2", 00:43:17.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:17.031 "is_configured": false, 00:43:17.031 "data_offset": 0, 00:43:17.031 "data_size": 0 00:43:17.031 }, 00:43:17.031 { 00:43:17.031 "name": "BaseBdev3", 00:43:17.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:17.031 "is_configured": false, 00:43:17.031 "data_offset": 0, 00:43:17.031 "data_size": 
0 00:43:17.031 } 00:43:17.031 ] 00:43:17.031 }' 00:43:17.031 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:17.031 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:17.599 [2024-12-09 05:34:04.323702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:43:17.599 BaseBdev2 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:17.599 [ 00:43:17.599 { 00:43:17.599 "name": "BaseBdev2", 00:43:17.599 "aliases": [ 00:43:17.599 "5dec0b87-79a0-4f1f-b503-b0e835100557" 00:43:17.599 ], 00:43:17.599 "product_name": "Malloc disk", 00:43:17.599 "block_size": 512, 00:43:17.599 "num_blocks": 65536, 00:43:17.599 "uuid": "5dec0b87-79a0-4f1f-b503-b0e835100557", 00:43:17.599 "assigned_rate_limits": { 00:43:17.599 "rw_ios_per_sec": 0, 00:43:17.599 "rw_mbytes_per_sec": 0, 00:43:17.599 "r_mbytes_per_sec": 0, 00:43:17.599 "w_mbytes_per_sec": 0 00:43:17.599 }, 00:43:17.599 "claimed": true, 00:43:17.599 "claim_type": "exclusive_write", 00:43:17.599 "zoned": false, 00:43:17.599 "supported_io_types": { 00:43:17.599 "read": true, 00:43:17.599 "write": true, 00:43:17.599 "unmap": true, 00:43:17.599 "flush": true, 00:43:17.599 "reset": true, 00:43:17.599 "nvme_admin": false, 00:43:17.599 "nvme_io": false, 00:43:17.599 "nvme_io_md": false, 00:43:17.599 "write_zeroes": true, 00:43:17.599 "zcopy": true, 00:43:17.599 "get_zone_info": false, 00:43:17.599 "zone_management": false, 00:43:17.599 "zone_append": false, 00:43:17.599 "compare": false, 00:43:17.599 "compare_and_write": false, 00:43:17.599 "abort": true, 00:43:17.599 "seek_hole": false, 00:43:17.599 "seek_data": false, 00:43:17.599 "copy": true, 00:43:17.599 "nvme_iov_md": false 00:43:17.599 }, 00:43:17.599 "memory_domains": [ 00:43:17.599 { 00:43:17.599 "dma_device_id": "system", 00:43:17.599 "dma_device_type": 1 00:43:17.599 }, 00:43:17.599 { 00:43:17.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:43:17.599 "dma_device_type": 2 00:43:17.599 } 
00:43:17.599 ], 00:43:17.599 "driver_specific": {} 00:43:17.599 } 00:43:17.599 ] 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:17.599 "name": "Existed_Raid", 00:43:17.599 "uuid": "2e0a87c6-6099-4bff-a26c-4fd03afc8f75", 00:43:17.599 "strip_size_kb": 64, 00:43:17.599 "state": "configuring", 00:43:17.599 "raid_level": "raid5f", 00:43:17.599 "superblock": true, 00:43:17.599 "num_base_bdevs": 3, 00:43:17.599 "num_base_bdevs_discovered": 2, 00:43:17.599 "num_base_bdevs_operational": 3, 00:43:17.599 "base_bdevs_list": [ 00:43:17.599 { 00:43:17.599 "name": "BaseBdev1", 00:43:17.599 "uuid": "b5100e4a-194f-4d60-908d-28845db70626", 00:43:17.599 "is_configured": true, 00:43:17.599 "data_offset": 2048, 00:43:17.599 "data_size": 63488 00:43:17.599 }, 00:43:17.599 { 00:43:17.599 "name": "BaseBdev2", 00:43:17.599 "uuid": "5dec0b87-79a0-4f1f-b503-b0e835100557", 00:43:17.599 "is_configured": true, 00:43:17.599 "data_offset": 2048, 00:43:17.599 "data_size": 63488 00:43:17.599 }, 00:43:17.599 { 00:43:17.599 "name": "BaseBdev3", 00:43:17.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:17.599 "is_configured": false, 00:43:17.599 "data_offset": 0, 00:43:17.599 "data_size": 0 00:43:17.599 } 00:43:17.599 ] 00:43:17.599 }' 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:17.599 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:18.167 [2024-12-09 05:34:04.946980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:43:18.167 [2024-12-09 05:34:04.947522] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:43:18.167 [2024-12-09 05:34:04.947555] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:43:18.167 BaseBdev3 00:43:18.167 [2024-12-09 05:34:04.947935] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:18.167 [2024-12-09 05:34:04.952946] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:43:18.167 [2024-12-09 05:34:04.952981] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:43:18.167 [2024-12-09 05:34:04.953270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:18.167 [ 00:43:18.167 { 00:43:18.167 "name": "BaseBdev3", 00:43:18.167 "aliases": [ 00:43:18.167 "6ecaf535-18c8-46bd-86b2-6dd0cbbf3493" 00:43:18.167 ], 00:43:18.167 "product_name": "Malloc disk", 00:43:18.167 "block_size": 512, 00:43:18.167 "num_blocks": 65536, 00:43:18.167 "uuid": "6ecaf535-18c8-46bd-86b2-6dd0cbbf3493", 00:43:18.167 "assigned_rate_limits": { 00:43:18.167 "rw_ios_per_sec": 0, 00:43:18.167 "rw_mbytes_per_sec": 0, 00:43:18.167 "r_mbytes_per_sec": 0, 00:43:18.167 "w_mbytes_per_sec": 0 00:43:18.167 }, 00:43:18.167 "claimed": true, 00:43:18.167 "claim_type": "exclusive_write", 00:43:18.167 "zoned": false, 00:43:18.167 "supported_io_types": { 00:43:18.167 "read": true, 00:43:18.167 "write": true, 00:43:18.167 "unmap": true, 00:43:18.167 "flush": true, 00:43:18.167 "reset": true, 00:43:18.167 "nvme_admin": false, 00:43:18.167 "nvme_io": false, 00:43:18.167 "nvme_io_md": false, 00:43:18.167 "write_zeroes": true, 00:43:18.167 "zcopy": true, 00:43:18.167 "get_zone_info": false, 00:43:18.167 "zone_management": false, 00:43:18.167 "zone_append": false, 00:43:18.167 "compare": false, 00:43:18.167 "compare_and_write": false, 00:43:18.167 "abort": true, 00:43:18.167 "seek_hole": false, 00:43:18.167 "seek_data": false, 00:43:18.167 "copy": true, 00:43:18.167 
"nvme_iov_md": false
00:43:18.167 },
00:43:18.167 "memory_domains": [
00:43:18.167 {
00:43:18.167 "dma_device_id": "system",
00:43:18.167 "dma_device_type": 1
00:43:18.167 },
00:43:18.167 {
00:43:18.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:43:18.167 "dma_device_type": 2
00:43:18.167 }
00:43:18.167 ],
00:43:18.167 "driver_specific": {}
00:43:18.167 }
00:43:18.167 ]
00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:18.167 05:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:43:18.167 05:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:18.167 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:43:18.167 "name": "Existed_Raid",
00:43:18.167 "uuid": "2e0a87c6-6099-4bff-a26c-4fd03afc8f75",
00:43:18.167 "strip_size_kb": 64,
00:43:18.167 "state": "online",
00:43:18.167 "raid_level": "raid5f",
00:43:18.167 "superblock": true,
00:43:18.167 "num_base_bdevs": 3,
00:43:18.167 "num_base_bdevs_discovered": 3,
00:43:18.167 "num_base_bdevs_operational": 3,
00:43:18.167 "base_bdevs_list": [
00:43:18.167 {
00:43:18.167 "name": "BaseBdev1",
00:43:18.167 "uuid": "b5100e4a-194f-4d60-908d-28845db70626",
00:43:18.167 "is_configured": true,
00:43:18.167 "data_offset": 2048,
00:43:18.167 "data_size": 63488
00:43:18.167 },
00:43:18.167 {
00:43:18.167 "name": "BaseBdev2",
00:43:18.168 "uuid": "5dec0b87-79a0-4f1f-b503-b0e835100557",
00:43:18.168 "is_configured": true,
00:43:18.168 "data_offset": 2048,
00:43:18.168 "data_size": 63488
00:43:18.168 },
00:43:18.168 {
00:43:18.168 "name": "BaseBdev3",
00:43:18.168 "uuid": "6ecaf535-18c8-46bd-86b2-6dd0cbbf3493",
00:43:18.168 "is_configured": true,
00:43:18.168 "data_offset": 2048,
00:43:18.168 "data_size": 63488
00:43:18.168 }
00:43:18.168 ]
00:43:18.168 }'
00:43:18.168 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:43:18.168 05:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:43:18.742 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:43:18.742 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:43:18.742 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:43:18.742 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:43:18.742 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:43:18.742 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:43:18.742 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:43:18.742 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:43:18.742 05:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:18.742 05:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:43:18.742 [2024-12-09 05:34:05.507396] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:43:18.742 05:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:18.742 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:43:18.742 "name": "Existed_Raid",
00:43:18.742 "aliases": [
00:43:18.742 "2e0a87c6-6099-4bff-a26c-4fd03afc8f75"
00:43:18.742 ],
00:43:18.742 "product_name": "Raid Volume",
00:43:18.742 "block_size": 512,
00:43:18.742 "num_blocks": 126976,
00:43:18.742 "uuid": "2e0a87c6-6099-4bff-a26c-4fd03afc8f75",
00:43:18.742 "assigned_rate_limits": {
00:43:18.742 "rw_ios_per_sec": 0,
00:43:18.742 "rw_mbytes_per_sec": 0,
00:43:18.742 "r_mbytes_per_sec": 0,
00:43:18.742 "w_mbytes_per_sec": 0
00:43:18.742 },
00:43:18.742 "claimed": false,
00:43:18.742 "zoned": false,
00:43:18.742 "supported_io_types": {
00:43:18.742 "read": true,
00:43:18.742 "write": true,
00:43:18.742 "unmap": false,
00:43:18.742 "flush": false,
00:43:18.742 "reset": true,
00:43:18.742 "nvme_admin": false,
00:43:18.742 "nvme_io": false,
00:43:18.742 "nvme_io_md": false,
00:43:18.742 "write_zeroes": true,
00:43:18.742 "zcopy": false,
00:43:18.742 "get_zone_info": false,
00:43:18.742 "zone_management": false,
00:43:18.742 "zone_append": false,
00:43:18.742 "compare": false,
00:43:18.742 "compare_and_write": false,
00:43:18.742 "abort": false,
00:43:18.742 "seek_hole": false,
00:43:18.742 "seek_data": false,
00:43:18.742 "copy": false,
00:43:18.742 "nvme_iov_md": false
00:43:18.742 },
00:43:18.742 "driver_specific": {
00:43:18.742 "raid": {
00:43:18.742 "uuid": "2e0a87c6-6099-4bff-a26c-4fd03afc8f75",
00:43:18.742 "strip_size_kb": 64,
00:43:18.742 "state": "online",
00:43:18.742 "raid_level": "raid5f",
00:43:18.742 "superblock": true,
00:43:18.742 "num_base_bdevs": 3,
00:43:18.742 "num_base_bdevs_discovered": 3,
00:43:18.742 "num_base_bdevs_operational": 3,
00:43:18.742 "base_bdevs_list": [
00:43:18.742 {
00:43:18.742 "name": "BaseBdev1",
00:43:18.742 "uuid": "b5100e4a-194f-4d60-908d-28845db70626",
00:43:18.742 "is_configured": true,
00:43:18.742 "data_offset": 2048,
00:43:18.742 "data_size": 63488
00:43:18.742 },
00:43:18.742 {
00:43:18.742 "name": "BaseBdev2",
00:43:18.742 "uuid": "5dec0b87-79a0-4f1f-b503-b0e835100557",
00:43:18.742 "is_configured": true,
00:43:18.742 "data_offset": 2048,
00:43:18.742 "data_size": 63488
00:43:18.742 },
00:43:18.742 {
00:43:18.742 "name": "BaseBdev3",
00:43:18.742 "uuid": "6ecaf535-18c8-46bd-86b2-6dd0cbbf3493",
00:43:18.742 "is_configured": true,
00:43:18.742 "data_offset": 2048,
00:43:18.742 "data_size": 63488
00:43:18.742 }
00:43:18.742 ]
00:43:18.742 }
00:43:18.742 }
00:43:18.742 }'
00:43:18.742 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:43:18.742 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:43:18.742 BaseBdev2
00:43:18.742 BaseBdev3'
00:43:18.743 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:43:18.743 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:43:18.743 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:43:18.743 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:43:18.743 05:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:18.743 05:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:43:18.743 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:43:18.743 05:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:19.001 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:43:19.001 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:43:19.001 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:43:19.001 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:43:19.001 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:43:19.001 05:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:19.001 05:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:43:19.001 05:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:19.001 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:43:19.001 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:43:19.001 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:43:19.001 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:43:19.001 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:43:19.001 05:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:19.001 05:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:43:19.001 05:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:19.001 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:43:19.001 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:43:19.001 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:43:19.001 05:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:19.001 05:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:43:19.001 [2024-12-09 05:34:05.831254] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:43:19.001 05:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:19.001 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:43:19.001 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f
00:43:19.001 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:43:19.001 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0
00:43:19.002 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:43:19.002 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2
00:43:19.002 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:43:19.002 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:43:19.002 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:43:19.002 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:43:19.002 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:43:19.002 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:43:19.002 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:43:19.002 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:43:19.002 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:43:19.002 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:43:19.002 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:43:19.002 05:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:19.002 05:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:43:19.002 05:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:19.002 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:43:19.002 "name": "Existed_Raid",
00:43:19.002 "uuid": "2e0a87c6-6099-4bff-a26c-4fd03afc8f75",
00:43:19.002 "strip_size_kb": 64,
00:43:19.002 "state": "online",
00:43:19.002 "raid_level": "raid5f",
00:43:19.002 "superblock": true,
00:43:19.002 "num_base_bdevs": 3,
00:43:19.002 "num_base_bdevs_discovered": 2,
00:43:19.002 "num_base_bdevs_operational": 2,
00:43:19.002 "base_bdevs_list": [
00:43:19.002 {
00:43:19.002 "name": null,
00:43:19.002 "uuid": "00000000-0000-0000-0000-000000000000",
00:43:19.002 "is_configured": false,
00:43:19.002 "data_offset": 0,
00:43:19.002 "data_size": 63488
00:43:19.002 },
00:43:19.002 {
00:43:19.002 "name": "BaseBdev2",
00:43:19.002 "uuid": "5dec0b87-79a0-4f1f-b503-b0e835100557",
00:43:19.002 "is_configured": true,
00:43:19.002 "data_offset": 2048,
00:43:19.002 "data_size": 63488
00:43:19.002 },
00:43:19.002 {
00:43:19.002 "name": "BaseBdev3",
00:43:19.002 "uuid": "6ecaf535-18c8-46bd-86b2-6dd0cbbf3493",
00:43:19.002 "is_configured": true,
00:43:19.002 "data_offset": 2048,
00:43:19.002 "data_size": 63488
00:43:19.002 }
00:43:19.002 ]
00:43:19.002 }'
00:43:19.002 05:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:43:19.002 05:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:43:19.567 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:43:19.567 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:43:19.567 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:43:19.567 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:43:19.567 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:19.567 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:43:19.567 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:19.567 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:43:19.567 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:43:19.567 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:43:19.567 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:19.567 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:43:19.567 [2024-12-09 05:34:06.469881] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
[2024-12-09 05:34:06.470119] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-12-09 05:34:06.561938] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:43:19.825 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:19.825 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:43:19.825 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:43:19.825 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:43:19.825 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:19.825 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:43:19.825 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:43:19.825 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:19.825 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:43:19.825 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:43:19.825 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:43:19.825 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:19.825 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:43:19.825 [2024-12-09 05:34:06.626002] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
[2024-12-09 05:34:06.626091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:43:19.825 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:19.825 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:43:19.825 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:43:19.825 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:43:19.825 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:19.825 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:43:19.825 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:43:19.825 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:19.825 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:43:19.825 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:43:19.825 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:43:19.825 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:43:19.825 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:43:19.825 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:43:19.825 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:19.825 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:43:20.084 BaseBdev2
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:43:20.084 [
00:43:20.084 {
00:43:20.084 "name": "BaseBdev2",
00:43:20.084 "aliases": [
00:43:20.084 "c0823309-a021-4814-93cc-38577e7ece67"
00:43:20.084 ],
00:43:20.084 "product_name": "Malloc disk",
00:43:20.084 "block_size": 512,
00:43:20.084 "num_blocks": 65536,
00:43:20.084 "uuid": "c0823309-a021-4814-93cc-38577e7ece67",
00:43:20.084 "assigned_rate_limits": {
00:43:20.084 "rw_ios_per_sec": 0,
00:43:20.084 "rw_mbytes_per_sec": 0,
00:43:20.084 "r_mbytes_per_sec": 0,
00:43:20.084 "w_mbytes_per_sec": 0
00:43:20.084 },
00:43:20.084 "claimed": false,
00:43:20.084 "zoned": false,
00:43:20.084 "supported_io_types": {
00:43:20.084 "read": true,
00:43:20.084 "write": true,
00:43:20.084 "unmap": true,
00:43:20.084 "flush": true,
00:43:20.084 "reset": true,
00:43:20.084 "nvme_admin": false,
00:43:20.084 "nvme_io": false,
00:43:20.084 "nvme_io_md": false,
00:43:20.084 "write_zeroes": true,
00:43:20.084 "zcopy": true,
00:43:20.084 "get_zone_info": false,
00:43:20.084 "zone_management": false,
00:43:20.084 "zone_append": false,
00:43:20.084 "compare": false,
00:43:20.084 "compare_and_write": false,
00:43:20.084 "abort": true,
00:43:20.084 "seek_hole": false,
00:43:20.084 "seek_data": false,
00:43:20.084 "copy": true,
00:43:20.084 "nvme_iov_md": false
00:43:20.084 },
00:43:20.084 "memory_domains": [
00:43:20.084 {
00:43:20.084 "dma_device_id": "system",
00:43:20.084 "dma_device_type": 1
00:43:20.084 },
00:43:20.084 {
00:43:20.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:43:20.084 "dma_device_type": 2
00:43:20.084 }
00:43:20.084 ],
00:43:20.084 "driver_specific": {}
00:43:20.084 }
00:43:20.084 ]
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:43:20.084 BaseBdev3
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:20.084 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:43:20.084 [
00:43:20.084 {
00:43:20.084 "name": "BaseBdev3",
00:43:20.084 "aliases": [
00:43:20.084 "c8d2f19d-6a35-45eb-9c97-606045ef1ae9"
00:43:20.084 ],
00:43:20.084 "product_name": "Malloc disk",
00:43:20.084 "block_size": 512,
00:43:20.084 "num_blocks": 65536,
00:43:20.084 "uuid": "c8d2f19d-6a35-45eb-9c97-606045ef1ae9",
00:43:20.084 "assigned_rate_limits": {
00:43:20.084 "rw_ios_per_sec": 0,
00:43:20.084 "rw_mbytes_per_sec": 0,
00:43:20.084 "r_mbytes_per_sec": 0,
00:43:20.084 "w_mbytes_per_sec": 0
00:43:20.084 },
00:43:20.084 "claimed": false,
00:43:20.084 "zoned": false,
00:43:20.084 "supported_io_types": {
00:43:20.084 "read": true,
00:43:20.084 "write": true,
00:43:20.084 "unmap": true,
00:43:20.084 "flush": true,
00:43:20.084 "reset": true,
00:43:20.084 "nvme_admin": false,
00:43:20.084 "nvme_io": false,
00:43:20.085 "nvme_io_md": false,
00:43:20.085 "write_zeroes": true,
00:43:20.085 "zcopy": true,
00:43:20.085 "get_zone_info": false,
00:43:20.085 "zone_management": false,
00:43:20.085 "zone_append": false,
00:43:20.085 "compare": false,
00:43:20.085 "compare_and_write": false,
00:43:20.085 "abort": true,
00:43:20.085 "seek_hole": false,
00:43:20.085 "seek_data": false,
00:43:20.085 "copy": true,
00:43:20.085 "nvme_iov_md": false
00:43:20.085 },
00:43:20.085 "memory_domains": [
00:43:20.085 {
00:43:20.085 "dma_device_id": "system",
00:43:20.085 "dma_device_type": 1
00:43:20.085 },
00:43:20.085 {
00:43:20.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:43:20.085 "dma_device_type": 2
00:43:20.085 }
00:43:20.085 ],
00:43:20.085 "driver_specific": {}
00:43:20.085 }
00:43:20.085 ]
00:43:20.085 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:20.085 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:43:20.085 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:43:20.085 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:43:20.085 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:43:20.085 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:20.085 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:43:20.085 [2024-12-09 05:34:06.918376] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
[2024-12-09 05:34:06.918435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
[2024-12-09 05:34:06.918466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:43:20.085 [2024-12-09 05:34:06.920949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:43:20.085 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:20.085 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:43:20.085 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:43:20.085 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:43:20.085 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:43:20.085 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:43:20.085 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:43:20.085 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:43:20.085 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:43:20.085 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:43:20.085 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:43:20.085 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:43:20.085 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:43:20.085 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:20.085 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:43:20.085 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:20.085 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:43:20.085 "name": "Existed_Raid",
00:43:20.085 "uuid": "509bfa75-ecc7-4e38-8b93-ef3e5a021905",
00:43:20.085 "strip_size_kb": 64,
00:43:20.085 "state": "configuring",
00:43:20.085 "raid_level": "raid5f",
00:43:20.085 "superblock": true,
00:43:20.085 "num_base_bdevs": 3,
00:43:20.085 "num_base_bdevs_discovered": 2,
00:43:20.085 "num_base_bdevs_operational": 3,
00:43:20.085 "base_bdevs_list": [
00:43:20.085 {
00:43:20.085 "name": "BaseBdev1",
00:43:20.085 "uuid": "00000000-0000-0000-0000-000000000000",
00:43:20.085 "is_configured": false,
00:43:20.085 "data_offset": 0,
00:43:20.085 "data_size": 0
00:43:20.085 },
00:43:20.085 {
00:43:20.085 "name": "BaseBdev2",
00:43:20.085 "uuid": "c0823309-a021-4814-93cc-38577e7ece67",
00:43:20.085 "is_configured": true,
00:43:20.085 "data_offset": 2048,
00:43:20.085 "data_size": 63488
00:43:20.085 },
00:43:20.085 {
00:43:20.085 "name": "BaseBdev3",
00:43:20.085 "uuid": "c8d2f19d-6a35-45eb-9c97-606045ef1ae9",
00:43:20.085 "is_configured": true,
00:43:20.085 "data_offset": 2048,
00:43:20.085 "data_size": 63488
00:43:20.085 }
00:43:20.085 ]
00:43:20.085 }'
00:43:20.085 05:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:43:20.085 05:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:43:20.651 05:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:43:20.651 05:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:20.651 05:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:43:20.651 [2024-12-09 05:34:07.402617] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
05:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:20.651 05:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:43:20.651 05:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:43:20.651 05:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:43:20.651 05:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:43:20.651 05:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:43:20.651 05:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:43:20.651 05:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:43:20.651 05:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:43:20.651 05:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:43:20.651 05:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:43:20.651 05:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:43:20.651 05:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:43:20.651 05:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:20.651 05:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:43:20.651 05:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:20.651 05:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:43:20.651 "name": "Existed_Raid",
00:43:20.651 "uuid": "509bfa75-ecc7-4e38-8b93-ef3e5a021905",
00:43:20.651 "strip_size_kb": 64,
00:43:20.651 "state": "configuring",
00:43:20.651 "raid_level": "raid5f",
00:43:20.651 "superblock": true,
00:43:20.651 "num_base_bdevs": 3,
00:43:20.651 "num_base_bdevs_discovered": 1,
00:43:20.651 "num_base_bdevs_operational": 3,
00:43:20.651 "base_bdevs_list": [
00:43:20.651 {
00:43:20.651 "name": "BaseBdev1",
00:43:20.651 "uuid": "00000000-0000-0000-0000-000000000000",
00:43:20.651 "is_configured": false,
00:43:20.651 "data_offset": 0,
00:43:20.651 "data_size": 0
00:43:20.651 },
00:43:20.651 {
00:43:20.651 "name": null,
00:43:20.651 "uuid": "c0823309-a021-4814-93cc-38577e7ece67",
00:43:20.651 "is_configured": false,
00:43:20.651 "data_offset": 0,
00:43:20.651 "data_size": 63488
00:43:20.651 },
00:43:20.651 {
00:43:20.651 "name": "BaseBdev3",
00:43:20.651 "uuid": "c8d2f19d-6a35-45eb-9c97-606045ef1ae9",
00:43:20.651 "is_configured": true,
00:43:20.651 "data_offset": 2048,
00:43:20.651 "data_size": 63488
00:43:20.651 }
00:43:20.651 ]
00:43:20.651 }'
00:43:20.651 05:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:43:20.651 05:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:43:20.909 05:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:43:20.909 05:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:43:20.909 05:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:20.909 05:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:43:21.169 05:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:21.169 05:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:43:21.169 05:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:43:21.169 05:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:21.169 05:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:43:21.169 [2024-12-09 05:34:07.967021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
BaseBdev1
00:43:21.169 05:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:21.169 05:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:43:21.169 05:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:43:21.169 05:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:43:21.169 05:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:43:21.169 05:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:43:21.169 05:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:43:21.169 05:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:43:21.169 05:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:21.169 05:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:43:21.169 05:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:21.169 05:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:43:21.169 05:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:21.169 05:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:43:21.169 [
00:43:21.169 {
00:43:21.169 "name": "BaseBdev1",
00:43:21.169 "aliases": [
00:43:21.169 "a33ac875-646e-4227-91ef-b8d932ada266"
00:43:21.169 ],
00:43:21.169 "product_name": "Malloc disk",
00:43:21.169 "block_size": 512,
00:43:21.169 "num_blocks": 65536,
00:43:21.169 "uuid": "a33ac875-646e-4227-91ef-b8d932ada266",
00:43:21.169 "assigned_rate_limits": {
00:43:21.169 "rw_ios_per_sec": 0,
00:43:21.169 "rw_mbytes_per_sec": 0,
00:43:21.169 "r_mbytes_per_sec": 0,
00:43:21.169 "w_mbytes_per_sec": 0
00:43:21.169 },
00:43:21.169 "claimed": true,
00:43:21.169 "claim_type": "exclusive_write",
00:43:21.169 "zoned": false,
00:43:21.169 "supported_io_types": {
00:43:21.169 "read": true,
00:43:21.169 "write": true,
00:43:21.169 "unmap": true,
00:43:21.169 "flush": true,
00:43:21.169 "reset": true,
00:43:21.169 "nvme_admin": false,
00:43:21.169 "nvme_io": false,
00:43:21.169 "nvme_io_md": false,
00:43:21.169 "write_zeroes": true,
00:43:21.169 "zcopy": true,
00:43:21.169 "get_zone_info": false,
00:43:21.169 "zone_management": false,
00:43:21.169 "zone_append": false,
00:43:21.169 "compare": false,
00:43:21.169 "compare_and_write": false,
00:43:21.169 "abort": true,
00:43:21.169 "seek_hole": false,
00:43:21.169 "seek_data": false,
00:43:21.169 "copy": true,
00:43:21.169 "nvme_iov_md": false
00:43:21.169 },
00:43:21.169 "memory_domains": [
00:43:21.169 {
00:43:21.169 "dma_device_id": "system",
00:43:21.169 "dma_device_type": 1
00:43:21.169 },
00:43:21.169 {
00:43:21.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:43:21.169 "dma_device_type": 2
00:43:21.169 }
00:43:21.169 ],
00:43:21.169 "driver_specific": {}
00:43:21.169 }
00:43:21.169 ]
00:43:21.169 05:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:21.169
05:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:43:21.169 05:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:43:21.169 05:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:43:21.169 05:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:43:21.169 05:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:21.169 05:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:21.169 05:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:43:21.169 05:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:21.169 05:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:21.169 05:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:21.169 05:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:21.169 05:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:21.169 05:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:21.169 05:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.169 05:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:21.169 05:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.169 05:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:43:21.169 "name": "Existed_Raid", 00:43:21.169 "uuid": "509bfa75-ecc7-4e38-8b93-ef3e5a021905", 00:43:21.169 "strip_size_kb": 64, 00:43:21.169 "state": "configuring", 00:43:21.169 "raid_level": "raid5f", 00:43:21.169 "superblock": true, 00:43:21.169 "num_base_bdevs": 3, 00:43:21.169 "num_base_bdevs_discovered": 2, 00:43:21.169 "num_base_bdevs_operational": 3, 00:43:21.169 "base_bdevs_list": [ 00:43:21.169 { 00:43:21.169 "name": "BaseBdev1", 00:43:21.169 "uuid": "a33ac875-646e-4227-91ef-b8d932ada266", 00:43:21.169 "is_configured": true, 00:43:21.169 "data_offset": 2048, 00:43:21.169 "data_size": 63488 00:43:21.169 }, 00:43:21.169 { 00:43:21.169 "name": null, 00:43:21.169 "uuid": "c0823309-a021-4814-93cc-38577e7ece67", 00:43:21.169 "is_configured": false, 00:43:21.169 "data_offset": 0, 00:43:21.170 "data_size": 63488 00:43:21.170 }, 00:43:21.170 { 00:43:21.170 "name": "BaseBdev3", 00:43:21.170 "uuid": "c8d2f19d-6a35-45eb-9c97-606045ef1ae9", 00:43:21.170 "is_configured": true, 00:43:21.170 "data_offset": 2048, 00:43:21.170 "data_size": 63488 00:43:21.170 } 00:43:21.170 ] 00:43:21.170 }' 00:43:21.170 05:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:21.170 05:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:21.741 05:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:21.741 05:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.741 05:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:21.741 05:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:43:21.741 05:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.741 05:34:08 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:43:21.741 05:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:43:21.741 05:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.741 05:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:21.741 [2024-12-09 05:34:08.575349] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:43:21.741 05:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.741 05:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:43:21.741 05:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:43:21.741 05:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:43:21.741 05:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:21.741 05:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:21.741 05:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:43:21.741 05:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:21.741 05:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:21.741 05:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:21.741 05:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:21.741 05:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:21.741 05:34:08 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:21.741 05:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.741 05:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:21.741 05:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.741 05:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:21.741 "name": "Existed_Raid", 00:43:21.741 "uuid": "509bfa75-ecc7-4e38-8b93-ef3e5a021905", 00:43:21.741 "strip_size_kb": 64, 00:43:21.741 "state": "configuring", 00:43:21.741 "raid_level": "raid5f", 00:43:21.741 "superblock": true, 00:43:21.741 "num_base_bdevs": 3, 00:43:21.741 "num_base_bdevs_discovered": 1, 00:43:21.741 "num_base_bdevs_operational": 3, 00:43:21.741 "base_bdevs_list": [ 00:43:21.741 { 00:43:21.741 "name": "BaseBdev1", 00:43:21.741 "uuid": "a33ac875-646e-4227-91ef-b8d932ada266", 00:43:21.741 "is_configured": true, 00:43:21.741 "data_offset": 2048, 00:43:21.741 "data_size": 63488 00:43:21.741 }, 00:43:21.741 { 00:43:21.741 "name": null, 00:43:21.741 "uuid": "c0823309-a021-4814-93cc-38577e7ece67", 00:43:21.741 "is_configured": false, 00:43:21.741 "data_offset": 0, 00:43:21.741 "data_size": 63488 00:43:21.741 }, 00:43:21.741 { 00:43:21.741 "name": null, 00:43:21.741 "uuid": "c8d2f19d-6a35-45eb-9c97-606045ef1ae9", 00:43:21.741 "is_configured": false, 00:43:21.741 "data_offset": 0, 00:43:21.741 "data_size": 63488 00:43:21.741 } 00:43:21.741 ] 00:43:21.741 }' 00:43:21.741 05:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:21.741 05:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:22.309 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:22.309 05:34:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.309 05:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:22.309 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:43:22.309 05:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.309 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:43:22.309 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:43:22.309 05:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.309 05:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:22.309 [2024-12-09 05:34:09.139476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:43:22.309 05:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.309 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:43:22.309 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:43:22.309 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:43:22.309 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:22.309 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:22.309 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:43:22.309 05:34:09 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:22.309 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:22.309 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:22.309 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:22.309 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:22.309 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:22.309 05:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.309 05:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:22.309 05:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.309 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:22.309 "name": "Existed_Raid", 00:43:22.309 "uuid": "509bfa75-ecc7-4e38-8b93-ef3e5a021905", 00:43:22.309 "strip_size_kb": 64, 00:43:22.309 "state": "configuring", 00:43:22.309 "raid_level": "raid5f", 00:43:22.309 "superblock": true, 00:43:22.309 "num_base_bdevs": 3, 00:43:22.309 "num_base_bdevs_discovered": 2, 00:43:22.309 "num_base_bdevs_operational": 3, 00:43:22.309 "base_bdevs_list": [ 00:43:22.309 { 00:43:22.310 "name": "BaseBdev1", 00:43:22.310 "uuid": "a33ac875-646e-4227-91ef-b8d932ada266", 00:43:22.310 "is_configured": true, 00:43:22.310 "data_offset": 2048, 00:43:22.310 "data_size": 63488 00:43:22.310 }, 00:43:22.310 { 00:43:22.310 "name": null, 00:43:22.310 "uuid": "c0823309-a021-4814-93cc-38577e7ece67", 00:43:22.310 "is_configured": false, 00:43:22.310 "data_offset": 0, 00:43:22.310 "data_size": 63488 00:43:22.310 }, 00:43:22.310 { 00:43:22.310 "name": "BaseBdev3", 00:43:22.310 
"uuid": "c8d2f19d-6a35-45eb-9c97-606045ef1ae9", 00:43:22.310 "is_configured": true, 00:43:22.310 "data_offset": 2048, 00:43:22.310 "data_size": 63488 00:43:22.310 } 00:43:22.310 ] 00:43:22.310 }' 00:43:22.310 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:22.310 05:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:22.877 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:22.877 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:43:22.877 05:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.877 05:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:22.877 05:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.877 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:43:22.877 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:43:22.877 05:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.877 05:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:22.877 [2024-12-09 05:34:09.755690] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:43:22.877 05:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.877 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:43:22.877 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:43:22.877 05:34:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:43:22.877 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:22.877 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:22.877 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:43:22.877 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:22.877 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:22.877 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:22.877 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:22.877 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:22.877 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:22.877 05:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.877 05:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:23.136 05:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:23.136 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:23.136 "name": "Existed_Raid", 00:43:23.136 "uuid": "509bfa75-ecc7-4e38-8b93-ef3e5a021905", 00:43:23.136 "strip_size_kb": 64, 00:43:23.136 "state": "configuring", 00:43:23.136 "raid_level": "raid5f", 00:43:23.136 "superblock": true, 00:43:23.136 "num_base_bdevs": 3, 00:43:23.136 "num_base_bdevs_discovered": 1, 00:43:23.136 "num_base_bdevs_operational": 3, 00:43:23.136 
"base_bdevs_list": [ 00:43:23.136 { 00:43:23.136 "name": null, 00:43:23.136 "uuid": "a33ac875-646e-4227-91ef-b8d932ada266", 00:43:23.136 "is_configured": false, 00:43:23.136 "data_offset": 0, 00:43:23.136 "data_size": 63488 00:43:23.136 }, 00:43:23.136 { 00:43:23.136 "name": null, 00:43:23.136 "uuid": "c0823309-a021-4814-93cc-38577e7ece67", 00:43:23.136 "is_configured": false, 00:43:23.136 "data_offset": 0, 00:43:23.136 "data_size": 63488 00:43:23.136 }, 00:43:23.136 { 00:43:23.136 "name": "BaseBdev3", 00:43:23.136 "uuid": "c8d2f19d-6a35-45eb-9c97-606045ef1ae9", 00:43:23.136 "is_configured": true, 00:43:23.136 "data_offset": 2048, 00:43:23.157 "data_size": 63488 00:43:23.157 } 00:43:23.157 ] 00:43:23.157 }' 00:43:23.157 05:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:23.157 05:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:23.415 05:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:23.415 05:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:43:23.415 05:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:23.415 05:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:23.415 05:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:23.673 05:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:43:23.673 05:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:43:23.673 05:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:23.673 05:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:43:23.673 [2024-12-09 05:34:10.415238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:43:23.673 05:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:23.673 05:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:43:23.673 05:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:43:23.673 05:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:43:23.673 05:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:23.673 05:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:23.673 05:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:43:23.673 05:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:23.673 05:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:23.673 05:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:23.673 05:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:23.673 05:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:23.673 05:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:23.673 05:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:23.673 05:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:23.673 05:34:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:23.673 05:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:23.673 "name": "Existed_Raid", 00:43:23.673 "uuid": "509bfa75-ecc7-4e38-8b93-ef3e5a021905", 00:43:23.673 "strip_size_kb": 64, 00:43:23.673 "state": "configuring", 00:43:23.673 "raid_level": "raid5f", 00:43:23.673 "superblock": true, 00:43:23.673 "num_base_bdevs": 3, 00:43:23.673 "num_base_bdevs_discovered": 2, 00:43:23.673 "num_base_bdevs_operational": 3, 00:43:23.673 "base_bdevs_list": [ 00:43:23.673 { 00:43:23.673 "name": null, 00:43:23.673 "uuid": "a33ac875-646e-4227-91ef-b8d932ada266", 00:43:23.673 "is_configured": false, 00:43:23.673 "data_offset": 0, 00:43:23.673 "data_size": 63488 00:43:23.673 }, 00:43:23.673 { 00:43:23.673 "name": "BaseBdev2", 00:43:23.673 "uuid": "c0823309-a021-4814-93cc-38577e7ece67", 00:43:23.673 "is_configured": true, 00:43:23.673 "data_offset": 2048, 00:43:23.673 "data_size": 63488 00:43:23.673 }, 00:43:23.673 { 00:43:23.673 "name": "BaseBdev3", 00:43:23.674 "uuid": "c8d2f19d-6a35-45eb-9c97-606045ef1ae9", 00:43:23.674 "is_configured": true, 00:43:23.674 "data_offset": 2048, 00:43:23.674 "data_size": 63488 00:43:23.674 } 00:43:23.674 ] 00:43:23.674 }' 00:43:23.674 05:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:23.674 05:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:24.239 05:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:43:24.239 05:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:24.239 05:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.239 05:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:43:24.239 05:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.239 05:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:43:24.239 05:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:24.239 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.239 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:24.239 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:43:24.239 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.239 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a33ac875-646e-4227-91ef-b8d932ada266 00:43:24.239 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.239 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:24.239 [2024-12-09 05:34:11.096368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:43:24.239 [2024-12-09 05:34:11.096637] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:43:24.239 [2024-12-09 05:34:11.096660] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:43:24.239 NewBaseBdev 00:43:24.239 [2024-12-09 05:34:11.097042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:43:24.239 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.239 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:43:24.239 05:34:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:43:24.239 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:43:24.239 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:43:24.239 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:43:24.239 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:43:24.239 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:43:24.239 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.239 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:24.239 [2024-12-09 05:34:11.102263] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:43:24.239 [2024-12-09 05:34:11.102287] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:43:24.239 [2024-12-09 05:34:11.102558] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:24.239 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.239 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:43:24.239 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.239 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:24.239 [ 00:43:24.239 { 00:43:24.239 "name": "NewBaseBdev", 00:43:24.239 "aliases": [ 00:43:24.239 "a33ac875-646e-4227-91ef-b8d932ada266" 00:43:24.239 ], 00:43:24.239 "product_name": "Malloc 
disk", 00:43:24.239 "block_size": 512, 00:43:24.239 "num_blocks": 65536, 00:43:24.239 "uuid": "a33ac875-646e-4227-91ef-b8d932ada266", 00:43:24.239 "assigned_rate_limits": { 00:43:24.239 "rw_ios_per_sec": 0, 00:43:24.239 "rw_mbytes_per_sec": 0, 00:43:24.239 "r_mbytes_per_sec": 0, 00:43:24.239 "w_mbytes_per_sec": 0 00:43:24.239 }, 00:43:24.239 "claimed": true, 00:43:24.239 "claim_type": "exclusive_write", 00:43:24.239 "zoned": false, 00:43:24.239 "supported_io_types": { 00:43:24.239 "read": true, 00:43:24.239 "write": true, 00:43:24.239 "unmap": true, 00:43:24.239 "flush": true, 00:43:24.239 "reset": true, 00:43:24.239 "nvme_admin": false, 00:43:24.239 "nvme_io": false, 00:43:24.239 "nvme_io_md": false, 00:43:24.239 "write_zeroes": true, 00:43:24.239 "zcopy": true, 00:43:24.239 "get_zone_info": false, 00:43:24.239 "zone_management": false, 00:43:24.239 "zone_append": false, 00:43:24.239 "compare": false, 00:43:24.239 "compare_and_write": false, 00:43:24.239 "abort": true, 00:43:24.239 "seek_hole": false, 00:43:24.239 "seek_data": false, 00:43:24.239 "copy": true, 00:43:24.239 "nvme_iov_md": false 00:43:24.239 }, 00:43:24.239 "memory_domains": [ 00:43:24.239 { 00:43:24.239 "dma_device_id": "system", 00:43:24.239 "dma_device_type": 1 00:43:24.239 }, 00:43:24.239 { 00:43:24.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:43:24.240 "dma_device_type": 2 00:43:24.240 } 00:43:24.240 ], 00:43:24.240 "driver_specific": {} 00:43:24.240 } 00:43:24.240 ] 00:43:24.240 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.240 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:43:24.240 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:43:24.240 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:43:24.240 05:34:11 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:24.240 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:24.240 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:24.240 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:43:24.240 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:24.240 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:24.240 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:24.240 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:24.240 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:24.240 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:24.240 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.240 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:24.240 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.240 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:24.240 "name": "Existed_Raid", 00:43:24.240 "uuid": "509bfa75-ecc7-4e38-8b93-ef3e5a021905", 00:43:24.240 "strip_size_kb": 64, 00:43:24.240 "state": "online", 00:43:24.240 "raid_level": "raid5f", 00:43:24.240 "superblock": true, 00:43:24.240 "num_base_bdevs": 3, 00:43:24.240 "num_base_bdevs_discovered": 3, 00:43:24.240 "num_base_bdevs_operational": 3, 00:43:24.240 
"base_bdevs_list": [ 00:43:24.240 { 00:43:24.240 "name": "NewBaseBdev", 00:43:24.240 "uuid": "a33ac875-646e-4227-91ef-b8d932ada266", 00:43:24.240 "is_configured": true, 00:43:24.240 "data_offset": 2048, 00:43:24.240 "data_size": 63488 00:43:24.240 }, 00:43:24.240 { 00:43:24.240 "name": "BaseBdev2", 00:43:24.240 "uuid": "c0823309-a021-4814-93cc-38577e7ece67", 00:43:24.240 "is_configured": true, 00:43:24.240 "data_offset": 2048, 00:43:24.240 "data_size": 63488 00:43:24.240 }, 00:43:24.240 { 00:43:24.240 "name": "BaseBdev3", 00:43:24.240 "uuid": "c8d2f19d-6a35-45eb-9c97-606045ef1ae9", 00:43:24.240 "is_configured": true, 00:43:24.240 "data_offset": 2048, 00:43:24.240 "data_size": 63488 00:43:24.240 } 00:43:24.240 ] 00:43:24.240 }' 00:43:24.240 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:24.240 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:24.806 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:43:24.806 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:43:24.806 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:43:24.806 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:43:24.806 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:43:24.806 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:43:24.806 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:43:24.806 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.806 05:34:11 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:43:24.806 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:43:24.806 [2024-12-09 05:34:11.684385] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:43:24.806 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.806 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:43:24.806 "name": "Existed_Raid", 00:43:24.806 "aliases": [ 00:43:24.806 "509bfa75-ecc7-4e38-8b93-ef3e5a021905" 00:43:24.806 ], 00:43:24.806 "product_name": "Raid Volume", 00:43:24.806 "block_size": 512, 00:43:24.806 "num_blocks": 126976, 00:43:24.806 "uuid": "509bfa75-ecc7-4e38-8b93-ef3e5a021905", 00:43:24.806 "assigned_rate_limits": { 00:43:24.806 "rw_ios_per_sec": 0, 00:43:24.806 "rw_mbytes_per_sec": 0, 00:43:24.806 "r_mbytes_per_sec": 0, 00:43:24.806 "w_mbytes_per_sec": 0 00:43:24.806 }, 00:43:24.806 "claimed": false, 00:43:24.806 "zoned": false, 00:43:24.806 "supported_io_types": { 00:43:24.806 "read": true, 00:43:24.806 "write": true, 00:43:24.806 "unmap": false, 00:43:24.806 "flush": false, 00:43:24.806 "reset": true, 00:43:24.806 "nvme_admin": false, 00:43:24.806 "nvme_io": false, 00:43:24.807 "nvme_io_md": false, 00:43:24.807 "write_zeroes": true, 00:43:24.807 "zcopy": false, 00:43:24.807 "get_zone_info": false, 00:43:24.807 "zone_management": false, 00:43:24.807 "zone_append": false, 00:43:24.807 "compare": false, 00:43:24.807 "compare_and_write": false, 00:43:24.807 "abort": false, 00:43:24.807 "seek_hole": false, 00:43:24.807 "seek_data": false, 00:43:24.807 "copy": false, 00:43:24.807 "nvme_iov_md": false 00:43:24.807 }, 00:43:24.807 "driver_specific": { 00:43:24.807 "raid": { 00:43:24.807 "uuid": "509bfa75-ecc7-4e38-8b93-ef3e5a021905", 00:43:24.807 "strip_size_kb": 64, 00:43:24.807 "state": "online", 00:43:24.807 "raid_level": "raid5f", 00:43:24.807 "superblock": true, 
00:43:24.807 "num_base_bdevs": 3, 00:43:24.807 "num_base_bdevs_discovered": 3, 00:43:24.807 "num_base_bdevs_operational": 3, 00:43:24.807 "base_bdevs_list": [ 00:43:24.807 { 00:43:24.807 "name": "NewBaseBdev", 00:43:24.807 "uuid": "a33ac875-646e-4227-91ef-b8d932ada266", 00:43:24.807 "is_configured": true, 00:43:24.807 "data_offset": 2048, 00:43:24.807 "data_size": 63488 00:43:24.807 }, 00:43:24.807 { 00:43:24.807 "name": "BaseBdev2", 00:43:24.807 "uuid": "c0823309-a021-4814-93cc-38577e7ece67", 00:43:24.807 "is_configured": true, 00:43:24.807 "data_offset": 2048, 00:43:24.807 "data_size": 63488 00:43:24.807 }, 00:43:24.807 { 00:43:24.807 "name": "BaseBdev3", 00:43:24.807 "uuid": "c8d2f19d-6a35-45eb-9c97-606045ef1ae9", 00:43:24.807 "is_configured": true, 00:43:24.807 "data_offset": 2048, 00:43:24.807 "data_size": 63488 00:43:24.807 } 00:43:24.807 ] 00:43:24.807 } 00:43:24.807 } 00:43:24.807 }' 00:43:24.807 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:43:25.066 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:43:25.066 BaseBdev2 00:43:25.066 BaseBdev3' 00:43:25.066 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:25.066 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:43:25.066 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:43:25.066 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:43:25.066 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:25.066 05:34:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:25.066 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:25.066 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:25.066 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:43:25.066 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:43:25.066 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:43:25.066 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:43:25.066 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:25.066 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:25.066 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:25.066 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:25.066 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:43:25.066 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:43:25.066 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:43:25.066 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:25.066 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:43:25.066 05:34:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:25.066 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:25.066 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:25.066 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:43:25.066 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:43:25.066 05:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:43:25.066 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:25.066 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:25.066 [2024-12-09 05:34:11.996188] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:43:25.066 [2024-12-09 05:34:11.996224] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:43:25.066 [2024-12-09 05:34:11.996341] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:25.066 [2024-12-09 05:34:11.996725] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:25.066 [2024-12-09 05:34:11.996764] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:43:25.066 05:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:25.066 05:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80968 00:43:25.066 05:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80968 ']' 00:43:25.066 05:34:12 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 80968 00:43:25.066 05:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:43:25.066 05:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:25.066 05:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80968 00:43:25.323 killing process with pid 80968 00:43:25.323 05:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:25.323 05:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:25.323 05:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80968' 00:43:25.323 05:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80968 00:43:25.323 [2024-12-09 05:34:12.037593] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:43:25.323 05:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80968 00:43:25.580 [2024-12-09 05:34:12.299632] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:43:26.513 05:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:43:26.513 00:43:26.513 real 0m11.910s 00:43:26.513 user 0m19.606s 00:43:26.513 sys 0m1.782s 00:43:26.513 ************************************ 00:43:26.513 END TEST raid5f_state_function_test_sb 00:43:26.513 ************************************ 00:43:26.513 05:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:26.513 05:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:26.771 05:34:13 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:43:26.771 05:34:13 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:43:26.771 05:34:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:26.771 05:34:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:43:26.771 ************************************ 00:43:26.771 START TEST raid5f_superblock_test 00:43:26.771 ************************************ 00:43:26.771 05:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:43:26.771 05:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:43:26.771 05:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:43:26.771 05:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:43:26.771 05:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:43:26.771 05:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:43:26.771 05:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:43:26.771 05:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:43:26.771 05:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:43:26.771 05:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:43:26.771 05:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:43:26.771 05:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:43:26.771 05:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:43:26.771 05:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:43:26.771 05:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:43:26.771 05:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:43:26.771 05:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:43:26.771 05:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81603 00:43:26.771 05:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:43:26.771 05:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81603 00:43:26.771 05:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81603 ']' 00:43:26.771 05:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:26.771 05:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:26.771 05:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:26.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:26.771 05:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:26.771 05:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:26.771 [2024-12-09 05:34:13.653670] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:43:26.771 [2024-12-09 05:34:13.654200] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81603 ] 00:43:27.029 [2024-12-09 05:34:13.855621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:27.288 [2024-12-09 05:34:14.037817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:27.546 [2024-12-09 05:34:14.266883] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:43:27.546 [2024-12-09 05:34:14.266942] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:43:27.806 05:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:27.806 05:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:43:27.806 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:43:27.806 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:43:27.806 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:43:27.806 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:43:27.806 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:43:27.806 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:43:27.806 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:43:27.806 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:43:27.806 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:43:27.806 05:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:27.806 05:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:27.806 malloc1 00:43:27.806 05:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:27.806 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:43:27.806 05:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:27.806 05:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:27.806 [2024-12-09 05:34:14.727352] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:43:27.806 [2024-12-09 05:34:14.727657] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:27.806 [2024-12-09 05:34:14.727729] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:43:27.806 [2024-12-09 05:34:14.727754] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:27.806 [2024-12-09 05:34:14.730892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:27.806 [2024-12-09 05:34:14.730937] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:43:27.806 pt1 00:43:27.806 05:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:27.806 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:43:27.806 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:43:27.806 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:43:27.806 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:43:27.806 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:43:27.806 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:43:27.806 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:43:27.806 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:43:27.806 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:43:27.806 05:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:27.806 05:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:28.066 malloc2 00:43:28.066 05:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.066 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:43:28.066 05:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.066 05:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:28.066 [2024-12-09 05:34:14.787155] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:43:28.066 [2024-12-09 05:34:14.787607] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:28.066 [2024-12-09 05:34:14.787705] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:43:28.066 [2024-12-09 05:34:14.787729] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:28.066 [2024-12-09 05:34:14.792030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:28.066 [2024-12-09 05:34:14.792091] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:43:28.066 pt2 00:43:28.066 05:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.066 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:43:28.066 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:43:28.066 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:43:28.066 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:43:28.066 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:43:28.066 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:43:28.066 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:43:28.066 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:43:28.066 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:43:28.066 05:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.066 05:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:28.066 malloc3 00:43:28.066 05:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.066 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:43:28.066 05:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.066 05:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:28.066 [2024-12-09 05:34:14.867140] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:43:28.066 [2024-12-09 05:34:14.867492] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:28.066 [2024-12-09 05:34:14.867574] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:43:28.066 [2024-12-09 05:34:14.867705] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:28.066 [2024-12-09 05:34:14.871105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:28.066 [2024-12-09 05:34:14.871269] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:43:28.066 pt3 00:43:28.066 05:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.066 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:43:28.066 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:43:28.067 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:43:28.067 05:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.067 05:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:28.067 [2024-12-09 05:34:14.879815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:43:28.067 [2024-12-09 05:34:14.882882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:43:28.067 [2024-12-09 05:34:14.882996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:43:28.067 [2024-12-09 05:34:14.883232] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:43:28.067 [2024-12-09 05:34:14.883260] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:43:28.067 [2024-12-09 05:34:14.883645] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:43:28.067 [2024-12-09 05:34:14.889787] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:43:28.067 [2024-12-09 05:34:14.889814] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:43:28.067 [2024-12-09 05:34:14.890116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:28.067 05:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.067 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:43:28.067 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:28.067 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:28.067 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:28.067 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:28.067 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:43:28.067 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:28.067 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:28.067 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:28.067 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:28.067 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:28.067 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:43:28.067 05:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.067 05:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:28.067 05:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.067 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:28.067 "name": "raid_bdev1", 00:43:28.067 "uuid": "64c795f7-389d-4489-903e-3b08ebfe4a97", 00:43:28.067 "strip_size_kb": 64, 00:43:28.067 "state": "online", 00:43:28.067 "raid_level": "raid5f", 00:43:28.067 "superblock": true, 00:43:28.067 "num_base_bdevs": 3, 00:43:28.067 "num_base_bdevs_discovered": 3, 00:43:28.067 "num_base_bdevs_operational": 3, 00:43:28.067 "base_bdevs_list": [ 00:43:28.067 { 00:43:28.067 "name": "pt1", 00:43:28.067 "uuid": "00000000-0000-0000-0000-000000000001", 00:43:28.067 "is_configured": true, 00:43:28.067 "data_offset": 2048, 00:43:28.067 "data_size": 63488 00:43:28.067 }, 00:43:28.067 { 00:43:28.067 "name": "pt2", 00:43:28.067 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:28.067 "is_configured": true, 00:43:28.067 "data_offset": 2048, 00:43:28.067 "data_size": 63488 00:43:28.067 }, 00:43:28.067 { 00:43:28.067 "name": "pt3", 00:43:28.067 "uuid": "00000000-0000-0000-0000-000000000003", 00:43:28.067 "is_configured": true, 00:43:28.067 "data_offset": 2048, 00:43:28.067 "data_size": 63488 00:43:28.067 } 00:43:28.067 ] 00:43:28.067 }' 00:43:28.067 05:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:28.067 05:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:28.643 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:43:28.643 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:43:28.643 05:34:15 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:43:28.643 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:43:28.643 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:43:28.643 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:43:28.643 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:43:28.643 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:43:28.643 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.643 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:28.643 [2024-12-09 05:34:15.429715] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:43:28.643 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.643 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:43:28.643 "name": "raid_bdev1", 00:43:28.643 "aliases": [ 00:43:28.643 "64c795f7-389d-4489-903e-3b08ebfe4a97" 00:43:28.643 ], 00:43:28.643 "product_name": "Raid Volume", 00:43:28.643 "block_size": 512, 00:43:28.643 "num_blocks": 126976, 00:43:28.643 "uuid": "64c795f7-389d-4489-903e-3b08ebfe4a97", 00:43:28.643 "assigned_rate_limits": { 00:43:28.643 "rw_ios_per_sec": 0, 00:43:28.643 "rw_mbytes_per_sec": 0, 00:43:28.643 "r_mbytes_per_sec": 0, 00:43:28.643 "w_mbytes_per_sec": 0 00:43:28.643 }, 00:43:28.643 "claimed": false, 00:43:28.643 "zoned": false, 00:43:28.643 "supported_io_types": { 00:43:28.643 "read": true, 00:43:28.643 "write": true, 00:43:28.643 "unmap": false, 00:43:28.643 "flush": false, 00:43:28.643 "reset": true, 00:43:28.643 "nvme_admin": false, 00:43:28.643 "nvme_io": false, 00:43:28.643 "nvme_io_md": false, 
00:43:28.643 "write_zeroes": true, 00:43:28.643 "zcopy": false, 00:43:28.643 "get_zone_info": false, 00:43:28.643 "zone_management": false, 00:43:28.643 "zone_append": false, 00:43:28.643 "compare": false, 00:43:28.643 "compare_and_write": false, 00:43:28.643 "abort": false, 00:43:28.643 "seek_hole": false, 00:43:28.643 "seek_data": false, 00:43:28.643 "copy": false, 00:43:28.643 "nvme_iov_md": false 00:43:28.643 }, 00:43:28.643 "driver_specific": { 00:43:28.643 "raid": { 00:43:28.643 "uuid": "64c795f7-389d-4489-903e-3b08ebfe4a97", 00:43:28.643 "strip_size_kb": 64, 00:43:28.643 "state": "online", 00:43:28.643 "raid_level": "raid5f", 00:43:28.643 "superblock": true, 00:43:28.643 "num_base_bdevs": 3, 00:43:28.643 "num_base_bdevs_discovered": 3, 00:43:28.643 "num_base_bdevs_operational": 3, 00:43:28.643 "base_bdevs_list": [ 00:43:28.643 { 00:43:28.643 "name": "pt1", 00:43:28.643 "uuid": "00000000-0000-0000-0000-000000000001", 00:43:28.643 "is_configured": true, 00:43:28.643 "data_offset": 2048, 00:43:28.643 "data_size": 63488 00:43:28.643 }, 00:43:28.643 { 00:43:28.643 "name": "pt2", 00:43:28.643 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:28.643 "is_configured": true, 00:43:28.643 "data_offset": 2048, 00:43:28.643 "data_size": 63488 00:43:28.643 }, 00:43:28.643 { 00:43:28.643 "name": "pt3", 00:43:28.643 "uuid": "00000000-0000-0000-0000-000000000003", 00:43:28.643 "is_configured": true, 00:43:28.643 "data_offset": 2048, 00:43:28.643 "data_size": 63488 00:43:28.643 } 00:43:28.643 ] 00:43:28.643 } 00:43:28.643 } 00:43:28.643 }' 00:43:28.643 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:43:28.643 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:43:28.643 pt2 00:43:28.643 pt3' 00:43:28.643 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:43:28.643 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:43:28.643 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:43:28.643 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:28.643 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:43:28.643 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.643 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:28.643 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:43:28.901 
05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:43:28.901 [2024-12-09 05:34:15.729636] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=64c795f7-389d-4489-903e-3b08ebfe4a97 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 64c795f7-389d-4489-903e-3b08ebfe4a97 ']' 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:43:28.901 05:34:15 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:28.901 [2024-12-09 05:34:15.785450] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:28.901 [2024-12-09 05:34:15.785485] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:43:28.901 [2024-12-09 05:34:15.785575] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:28.901 [2024-12-09 05:34:15.785687] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:28.901 [2024-12-09 05:34:15.785702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:43:28.901 05:34:15 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.902 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:28.902 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.902 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:43:28.902 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:43:28.902 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.902 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:28.902 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.902 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:43:28.902 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:43:28.902 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.902 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:28.902 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.902 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:43:28.902 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:43:28.902 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.902 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:29.160 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:29.160 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:43:29.160 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:43:29.160 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:43:29.160 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:43:29.160 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:43:29.160 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:29.160 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:43:29.160 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:29.160 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:43:29.160 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:29.160 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:29.160 [2024-12-09 05:34:15.929539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:43:29.160 [2024-12-09 05:34:15.932320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:43:29.160 [2024-12-09 05:34:15.932392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:43:29.160 [2024-12-09 05:34:15.932480] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:43:29.160 [2024-12-09 05:34:15.932581] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:43:29.160 [2024-12-09 05:34:15.932614] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:43:29.160 [2024-12-09 05:34:15.932641] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:29.160 [2024-12-09 05:34:15.932655] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:43:29.160 request: 00:43:29.160 { 00:43:29.160 "name": "raid_bdev1", 00:43:29.160 "raid_level": "raid5f", 00:43:29.160 "base_bdevs": [ 00:43:29.160 "malloc1", 00:43:29.160 "malloc2", 00:43:29.160 "malloc3" 00:43:29.160 ], 00:43:29.160 "strip_size_kb": 64, 00:43:29.160 "superblock": false, 00:43:29.160 "method": "bdev_raid_create", 00:43:29.160 "req_id": 1 00:43:29.160 } 00:43:29.160 Got JSON-RPC error response 00:43:29.160 response: 00:43:29.160 { 00:43:29.160 "code": -17, 00:43:29.160 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:43:29.160 } 00:43:29.160 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:43:29.160 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:43:29.160 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:29.160 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:29.160 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:29.160 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:29.161 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:29.161 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:43:29.161 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:43:29.161 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:29.161 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:43:29.161 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:43:29.161 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:43:29.161 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:29.161 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:29.161 [2024-12-09 05:34:15.993557] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:43:29.161 [2024-12-09 05:34:15.993644] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:29.161 [2024-12-09 05:34:15.993673] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:43:29.161 [2024-12-09 05:34:15.993687] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:29.161 [2024-12-09 05:34:15.996901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:29.161 [2024-12-09 05:34:15.996947] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:43:29.161 [2024-12-09 05:34:15.997048] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:43:29.161 [2024-12-09 05:34:15.997141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:43:29.161 pt1 00:43:29.161 05:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:29.161 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:43:29.161 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:29.161 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:43:29.161 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:29.161 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:29.161 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:43:29.161 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:29.161 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:29.161 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:29.161 05:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:29.161 05:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:29.161 05:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:29.161 05:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:29.161 05:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:29.161 05:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:29.161 05:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:29.161 "name": "raid_bdev1", 00:43:29.161 "uuid": "64c795f7-389d-4489-903e-3b08ebfe4a97", 00:43:29.161 "strip_size_kb": 64, 00:43:29.161 "state": "configuring", 00:43:29.161 "raid_level": "raid5f", 00:43:29.161 "superblock": true, 00:43:29.161 "num_base_bdevs": 3, 00:43:29.161 "num_base_bdevs_discovered": 1, 00:43:29.161 
"num_base_bdevs_operational": 3, 00:43:29.161 "base_bdevs_list": [ 00:43:29.161 { 00:43:29.161 "name": "pt1", 00:43:29.161 "uuid": "00000000-0000-0000-0000-000000000001", 00:43:29.161 "is_configured": true, 00:43:29.161 "data_offset": 2048, 00:43:29.161 "data_size": 63488 00:43:29.161 }, 00:43:29.161 { 00:43:29.161 "name": null, 00:43:29.161 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:29.161 "is_configured": false, 00:43:29.161 "data_offset": 2048, 00:43:29.161 "data_size": 63488 00:43:29.161 }, 00:43:29.161 { 00:43:29.161 "name": null, 00:43:29.161 "uuid": "00000000-0000-0000-0000-000000000003", 00:43:29.161 "is_configured": false, 00:43:29.161 "data_offset": 2048, 00:43:29.161 "data_size": 63488 00:43:29.161 } 00:43:29.161 ] 00:43:29.161 }' 00:43:29.161 05:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:29.161 05:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:29.726 05:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:43:29.726 05:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:43:29.726 05:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:29.726 05:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:29.726 [2024-12-09 05:34:16.493756] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:43:29.726 [2024-12-09 05:34:16.493898] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:29.726 [2024-12-09 05:34:16.493936] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:43:29.726 [2024-12-09 05:34:16.493953] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:29.726 [2024-12-09 05:34:16.494670] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:29.726 [2024-12-09 05:34:16.494732] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:43:29.726 [2024-12-09 05:34:16.494867] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:43:29.726 [2024-12-09 05:34:16.494911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:43:29.726 pt2 00:43:29.726 05:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:29.726 05:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:43:29.726 05:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:29.726 05:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:29.726 [2024-12-09 05:34:16.501692] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:43:29.726 05:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:29.726 05:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:43:29.726 05:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:29.726 05:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:43:29.727 05:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:29.727 05:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:29.727 05:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:43:29.727 05:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:29.727 05:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:43:29.727 05:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:29.727 05:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:29.727 05:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:29.727 05:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:29.727 05:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:29.727 05:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:29.727 05:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:29.727 05:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:29.727 "name": "raid_bdev1", 00:43:29.727 "uuid": "64c795f7-389d-4489-903e-3b08ebfe4a97", 00:43:29.727 "strip_size_kb": 64, 00:43:29.727 "state": "configuring", 00:43:29.727 "raid_level": "raid5f", 00:43:29.727 "superblock": true, 00:43:29.727 "num_base_bdevs": 3, 00:43:29.727 "num_base_bdevs_discovered": 1, 00:43:29.727 "num_base_bdevs_operational": 3, 00:43:29.727 "base_bdevs_list": [ 00:43:29.727 { 00:43:29.727 "name": "pt1", 00:43:29.727 "uuid": "00000000-0000-0000-0000-000000000001", 00:43:29.727 "is_configured": true, 00:43:29.727 "data_offset": 2048, 00:43:29.727 "data_size": 63488 00:43:29.727 }, 00:43:29.727 { 00:43:29.727 "name": null, 00:43:29.727 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:29.727 "is_configured": false, 00:43:29.727 "data_offset": 0, 00:43:29.727 "data_size": 63488 00:43:29.727 }, 00:43:29.727 { 00:43:29.727 "name": null, 00:43:29.727 "uuid": "00000000-0000-0000-0000-000000000003", 00:43:29.727 "is_configured": false, 00:43:29.727 "data_offset": 2048, 00:43:29.727 "data_size": 63488 00:43:29.727 } 00:43:29.727 ] 00:43:29.727 }' 00:43:29.727 05:34:16 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:29.727 05:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:30.294 05:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:43:30.294 05:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:43:30.294 05:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:43:30.294 05:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:30.294 05:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:30.294 [2024-12-09 05:34:17.005913] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:43:30.294 [2024-12-09 05:34:17.006020] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:30.294 [2024-12-09 05:34:17.006049] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:43:30.294 [2024-12-09 05:34:17.006065] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:30.294 [2024-12-09 05:34:17.006724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:30.294 [2024-12-09 05:34:17.006756] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:43:30.294 [2024-12-09 05:34:17.006910] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:43:30.294 [2024-12-09 05:34:17.006959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:43:30.294 pt2 00:43:30.294 05:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:30.294 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:43:30.294 05:34:17 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:43:30.294 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:43:30.294 05:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:30.294 05:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:30.294 [2024-12-09 05:34:17.017839] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:43:30.294 [2024-12-09 05:34:17.017906] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:30.294 [2024-12-09 05:34:17.017925] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:43:30.294 [2024-12-09 05:34:17.017940] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:30.294 [2024-12-09 05:34:17.018352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:30.294 [2024-12-09 05:34:17.018386] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:43:30.294 [2024-12-09 05:34:17.018495] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:43:30.294 [2024-12-09 05:34:17.018553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:43:30.294 [2024-12-09 05:34:17.018717] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:43:30.294 [2024-12-09 05:34:17.018747] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:43:30.294 [2024-12-09 05:34:17.019134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:43:30.294 [2024-12-09 05:34:17.023577] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:43:30.294 pt3 00:43:30.294 [2024-12-09 05:34:17.023758] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:43:30.294 [2024-12-09 05:34:17.024065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:30.294 05:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:30.294 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:43:30.294 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:43:30.294 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:43:30.294 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:30.294 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:30.294 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:30.294 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:30.294 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:43:30.294 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:30.294 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:30.294 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:30.294 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:30.294 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:30.294 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:30.294 05:34:17 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:43:30.294 05:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:30.294 05:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:30.294 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:30.294 "name": "raid_bdev1", 00:43:30.294 "uuid": "64c795f7-389d-4489-903e-3b08ebfe4a97", 00:43:30.294 "strip_size_kb": 64, 00:43:30.294 "state": "online", 00:43:30.294 "raid_level": "raid5f", 00:43:30.294 "superblock": true, 00:43:30.294 "num_base_bdevs": 3, 00:43:30.294 "num_base_bdevs_discovered": 3, 00:43:30.294 "num_base_bdevs_operational": 3, 00:43:30.294 "base_bdevs_list": [ 00:43:30.294 { 00:43:30.294 "name": "pt1", 00:43:30.294 "uuid": "00000000-0000-0000-0000-000000000001", 00:43:30.294 "is_configured": true, 00:43:30.294 "data_offset": 2048, 00:43:30.294 "data_size": 63488 00:43:30.294 }, 00:43:30.294 { 00:43:30.294 "name": "pt2", 00:43:30.294 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:30.294 "is_configured": true, 00:43:30.294 "data_offset": 2048, 00:43:30.294 "data_size": 63488 00:43:30.294 }, 00:43:30.294 { 00:43:30.294 "name": "pt3", 00:43:30.294 "uuid": "00000000-0000-0000-0000-000000000003", 00:43:30.294 "is_configured": true, 00:43:30.294 "data_offset": 2048, 00:43:30.294 "data_size": 63488 00:43:30.294 } 00:43:30.294 ] 00:43:30.294 }' 00:43:30.294 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:30.294 05:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:30.861 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:43:30.861 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:43:30.861 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:43:30.861 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:43:30.861 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:43:30.861 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:43:30.861 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:43:30.861 05:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:30.861 05:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:30.861 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:43:30.861 [2024-12-09 05:34:17.562104] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:43:30.861 05:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:30.861 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:43:30.861 "name": "raid_bdev1", 00:43:30.861 "aliases": [ 00:43:30.861 "64c795f7-389d-4489-903e-3b08ebfe4a97" 00:43:30.861 ], 00:43:30.861 "product_name": "Raid Volume", 00:43:30.861 "block_size": 512, 00:43:30.861 "num_blocks": 126976, 00:43:30.861 "uuid": "64c795f7-389d-4489-903e-3b08ebfe4a97", 00:43:30.861 "assigned_rate_limits": { 00:43:30.861 "rw_ios_per_sec": 0, 00:43:30.861 "rw_mbytes_per_sec": 0, 00:43:30.861 "r_mbytes_per_sec": 0, 00:43:30.861 "w_mbytes_per_sec": 0 00:43:30.861 }, 00:43:30.861 "claimed": false, 00:43:30.861 "zoned": false, 00:43:30.861 "supported_io_types": { 00:43:30.861 "read": true, 00:43:30.861 "write": true, 00:43:30.861 "unmap": false, 00:43:30.861 "flush": false, 00:43:30.861 "reset": true, 00:43:30.861 "nvme_admin": false, 00:43:30.861 "nvme_io": false, 00:43:30.861 "nvme_io_md": false, 00:43:30.861 "write_zeroes": true, 00:43:30.861 "zcopy": false, 00:43:30.861 
"get_zone_info": false, 00:43:30.861 "zone_management": false, 00:43:30.861 "zone_append": false, 00:43:30.861 "compare": false, 00:43:30.861 "compare_and_write": false, 00:43:30.861 "abort": false, 00:43:30.861 "seek_hole": false, 00:43:30.861 "seek_data": false, 00:43:30.861 "copy": false, 00:43:30.861 "nvme_iov_md": false 00:43:30.861 }, 00:43:30.861 "driver_specific": { 00:43:30.861 "raid": { 00:43:30.861 "uuid": "64c795f7-389d-4489-903e-3b08ebfe4a97", 00:43:30.861 "strip_size_kb": 64, 00:43:30.861 "state": "online", 00:43:30.861 "raid_level": "raid5f", 00:43:30.861 "superblock": true, 00:43:30.861 "num_base_bdevs": 3, 00:43:30.861 "num_base_bdevs_discovered": 3, 00:43:30.861 "num_base_bdevs_operational": 3, 00:43:30.861 "base_bdevs_list": [ 00:43:30.861 { 00:43:30.861 "name": "pt1", 00:43:30.861 "uuid": "00000000-0000-0000-0000-000000000001", 00:43:30.861 "is_configured": true, 00:43:30.861 "data_offset": 2048, 00:43:30.861 "data_size": 63488 00:43:30.861 }, 00:43:30.861 { 00:43:30.861 "name": "pt2", 00:43:30.861 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:30.861 "is_configured": true, 00:43:30.861 "data_offset": 2048, 00:43:30.861 "data_size": 63488 00:43:30.861 }, 00:43:30.861 { 00:43:30.861 "name": "pt3", 00:43:30.861 "uuid": "00000000-0000-0000-0000-000000000003", 00:43:30.861 "is_configured": true, 00:43:30.861 "data_offset": 2048, 00:43:30.861 "data_size": 63488 00:43:30.861 } 00:43:30.861 ] 00:43:30.861 } 00:43:30.861 } 00:43:30.861 }' 00:43:30.861 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:43:30.861 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:43:30.861 pt2 00:43:30.861 pt3' 00:43:30.861 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:30.861 05:34:17 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:43:30.861 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:43:30.861 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:30.861 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:43:30.861 05:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:30.861 05:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:30.861 05:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:30.861 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:43:30.861 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:43:30.861 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:43:30.861 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:30.861 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:43:30.861 05:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:30.861 05:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:30.861 05:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:30.862 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:43:30.862 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:43:30.862 05:34:17 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:31.121 [2024-12-09 05:34:17.894178] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 64c795f7-389d-4489-903e-3b08ebfe4a97 '!=' 64c795f7-389d-4489-903e-3b08ebfe4a97 ']' 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
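The trace above (bdev_raid.sh lines 187-193) uses `jq` to pull the configured base bdev names out of the `bdev_get_bdevs` RPC JSON and to build a `block_size md_size md_interleave dif_type` comparison string for the raid bdev and each `pt` bdev. A minimal Python sketch of the same two selections, using a trimmed sample of the `raid_bdev1` dump copied from this log (this is an illustration of the jq logic, not a reimplementation of the test script; only the fields the filters touch are kept):

```python
import json

# Trimmed sample of the `rpc_cmd bdev_get_bdevs -b raid_bdev1` output seen
# in the trace above; values are copied from the log.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "block_size": 512,
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "pt1", "is_configured": true},
        {"name": "pt2", "is_configured": true},
        {"name": "pt3", "is_configured": true}
      ]
    }
  }
}
""")

# jq: .driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name
base_bdev_names = [
    b["name"]
    for b in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
    if b["is_configured"]
]

# jq: [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")
# Absent/null fields join as empty strings, which is why the trace compares
# against '512   ' (the literal "512" followed by three blanks,
# i.e. \5\1\2\ \ \  in the [[ ]] pattern).
fields = ["block_size", "md_size", "md_interleave", "dif_type"]
cmp_raid_bdev = " ".join(
    str(raid_bdev_info[f]) if f in raid_bdev_info else ""
    for f in fields
)

print(base_bdev_names)        # ['pt1', 'pt2', 'pt3']
print(repr(cmp_raid_bdev))    # '512   '
```

The trailing blanks explain the odd-looking `[[ 512  == \5\1\2\ \ \  ]]` comparisons in the trace: the `pt` bdevs carry no metadata or DIF configuration, so three of the four joined fields are empty.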
00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:31.121 [2024-12-09 05:34:17.946114] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:31.121 
05:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:31.121 05:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.121 05:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:31.121 "name": "raid_bdev1", 00:43:31.121 "uuid": "64c795f7-389d-4489-903e-3b08ebfe4a97", 00:43:31.121 "strip_size_kb": 64, 00:43:31.121 "state": "online", 00:43:31.121 "raid_level": "raid5f", 00:43:31.121 "superblock": true, 00:43:31.121 "num_base_bdevs": 3, 00:43:31.121 "num_base_bdevs_discovered": 2, 00:43:31.121 "num_base_bdevs_operational": 2, 00:43:31.121 "base_bdevs_list": [ 00:43:31.121 { 00:43:31.121 "name": null, 00:43:31.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:31.121 "is_configured": false, 00:43:31.121 "data_offset": 0, 00:43:31.121 "data_size": 63488 00:43:31.121 }, 00:43:31.121 { 00:43:31.121 "name": "pt2", 00:43:31.121 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:31.121 "is_configured": true, 00:43:31.121 "data_offset": 2048, 00:43:31.121 "data_size": 63488 00:43:31.121 }, 00:43:31.121 { 00:43:31.121 "name": "pt3", 00:43:31.121 "uuid": "00000000-0000-0000-0000-000000000003", 00:43:31.121 "is_configured": true, 00:43:31.121 "data_offset": 2048, 00:43:31.121 "data_size": 63488 00:43:31.121 } 00:43:31.121 ] 00:43:31.121 }' 00:43:31.121 05:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:31.121 05:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:31.693 05:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:43:31.693 05:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.693 05:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:31.693 [2024-12-09 05:34:18.474080] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:31.693 [2024-12-09 05:34:18.474156] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:43:31.693 [2024-12-09 05:34:18.474251] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:31.693 [2024-12-09 05:34:18.474370] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:31.693 [2024-12-09 05:34:18.474391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:43:31.693 05:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.693 05:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:31.693 05:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.693 05:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:43:31.693 05:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:31.693 05:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.693 05:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:43:31.693 05:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:43:31.693 05:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:43:31.693 05:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:43:31.693 05:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:43:31.693 05:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.693 05:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:43:31.693 05:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.693 05:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:43:31.693 05:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:43:31.693 05:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:43:31.693 05:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.693 05:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:31.693 05:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.693 05:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:43:31.693 05:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:43:31.693 05:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:43:31.693 05:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:43:31.693 05:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:43:31.693 05:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.693 05:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:31.693 [2024-12-09 05:34:18.554104] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:43:31.693 [2024-12-09 05:34:18.554230] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:31.693 [2024-12-09 05:34:18.554255] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:43:31.693 [2024-12-09 05:34:18.554269] vbdev_passthru.c: 697:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:43:31.693 [2024-12-09 05:34:18.557376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:31.694 [2024-12-09 05:34:18.557433] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:43:31.694 [2024-12-09 05:34:18.557519] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:43:31.694 [2024-12-09 05:34:18.557580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:43:31.694 pt2 00:43:31.694 05:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.694 05:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:43:31.694 05:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:31.694 05:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:43:31.694 05:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:31.694 05:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:31.694 05:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:31.694 05:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:31.694 05:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:31.694 05:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:31.694 05:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:31.694 05:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:31.694 05:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:43:31.694 05:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.694 05:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:31.694 05:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.694 05:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:31.694 "name": "raid_bdev1", 00:43:31.694 "uuid": "64c795f7-389d-4489-903e-3b08ebfe4a97", 00:43:31.694 "strip_size_kb": 64, 00:43:31.694 "state": "configuring", 00:43:31.694 "raid_level": "raid5f", 00:43:31.694 "superblock": true, 00:43:31.694 "num_base_bdevs": 3, 00:43:31.694 "num_base_bdevs_discovered": 1, 00:43:31.694 "num_base_bdevs_operational": 2, 00:43:31.694 "base_bdevs_list": [ 00:43:31.694 { 00:43:31.694 "name": null, 00:43:31.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:31.694 "is_configured": false, 00:43:31.694 "data_offset": 2048, 00:43:31.694 "data_size": 63488 00:43:31.694 }, 00:43:31.694 { 00:43:31.694 "name": "pt2", 00:43:31.694 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:31.694 "is_configured": true, 00:43:31.694 "data_offset": 2048, 00:43:31.694 "data_size": 63488 00:43:31.694 }, 00:43:31.694 { 00:43:31.694 "name": null, 00:43:31.694 "uuid": "00000000-0000-0000-0000-000000000003", 00:43:31.694 "is_configured": false, 00:43:31.694 "data_offset": 2048, 00:43:31.694 "data_size": 63488 00:43:31.694 } 00:43:31.694 ] 00:43:31.694 }' 00:43:31.694 05:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:31.694 05:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:32.259 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:43:32.259 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:43:32.259 05:34:19 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:43:32.259 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:43:32.259 05:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:32.259 05:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:32.259 [2024-12-09 05:34:19.102381] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:43:32.259 [2024-12-09 05:34:19.102499] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:32.259 [2024-12-09 05:34:19.102558] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:43:32.259 [2024-12-09 05:34:19.102577] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:32.259 [2024-12-09 05:34:19.103322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:32.259 [2024-12-09 05:34:19.103534] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:43:32.259 [2024-12-09 05:34:19.103663] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:43:32.259 [2024-12-09 05:34:19.103711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:43:32.259 [2024-12-09 05:34:19.103938] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:43:32.259 [2024-12-09 05:34:19.103961] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:43:32.259 [2024-12-09 05:34:19.104331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:43:32.259 [2024-12-09 05:34:19.109509] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:43:32.259 [2024-12-09 05:34:19.109666] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:43:32.260 [2024-12-09 05:34:19.110279] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:32.260 pt3 00:43:32.260 05:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:32.260 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:43:32.260 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:32.260 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:32.260 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:32.260 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:32.260 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:32.260 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:32.260 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:32.260 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:32.260 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:32.260 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:32.260 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:32.260 05:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:32.260 05:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:32.260 05:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:32.260 05:34:19 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:32.260 "name": "raid_bdev1", 00:43:32.260 "uuid": "64c795f7-389d-4489-903e-3b08ebfe4a97", 00:43:32.260 "strip_size_kb": 64, 00:43:32.260 "state": "online", 00:43:32.260 "raid_level": "raid5f", 00:43:32.260 "superblock": true, 00:43:32.260 "num_base_bdevs": 3, 00:43:32.260 "num_base_bdevs_discovered": 2, 00:43:32.260 "num_base_bdevs_operational": 2, 00:43:32.260 "base_bdevs_list": [ 00:43:32.260 { 00:43:32.260 "name": null, 00:43:32.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:32.260 "is_configured": false, 00:43:32.260 "data_offset": 2048, 00:43:32.260 "data_size": 63488 00:43:32.260 }, 00:43:32.260 { 00:43:32.260 "name": "pt2", 00:43:32.260 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:32.260 "is_configured": true, 00:43:32.260 "data_offset": 2048, 00:43:32.260 "data_size": 63488 00:43:32.260 }, 00:43:32.260 { 00:43:32.260 "name": "pt3", 00:43:32.260 "uuid": "00000000-0000-0000-0000-000000000003", 00:43:32.260 "is_configured": true, 00:43:32.260 "data_offset": 2048, 00:43:32.260 "data_size": 63488 00:43:32.260 } 00:43:32.260 ] 00:43:32.260 }' 00:43:32.260 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:32.260 05:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:32.827 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:43:32.827 05:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:32.827 05:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:32.827 [2024-12-09 05:34:19.656666] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:32.827 [2024-12-09 05:34:19.656706] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:43:32.827 [2024-12-09 05:34:19.656846] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:32.827 [2024-12-09 05:34:19.656974] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:32.827 [2024-12-09 05:34:19.656992] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:43:32.827 05:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:32.827 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:32.827 05:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:32.827 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:43:32.827 05:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:32.827 05:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:32.827 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:43:32.827 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:43:32.827 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:43:32.827 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:43:32.827 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:43:32.827 05:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:32.827 05:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:32.827 05:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:32.827 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:43:32.827 05:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:32.827 05:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:32.827 [2024-12-09 05:34:19.724663] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:43:32.828 [2024-12-09 05:34:19.724757] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:32.828 [2024-12-09 05:34:19.724811] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:43:32.828 [2024-12-09 05:34:19.724841] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:32.828 [2024-12-09 05:34:19.727929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:32.828 [2024-12-09 05:34:19.727972] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:43:32.828 [2024-12-09 05:34:19.728073] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:43:32.828 [2024-12-09 05:34:19.728179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:43:32.828 [2024-12-09 05:34:19.728340] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:43:32.828 [2024-12-09 05:34:19.728358] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:32.828 [2024-12-09 05:34:19.728380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:43:32.828 [2024-12-09 05:34:19.728437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:43:32.828 pt1 00:43:32.828 05:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:32.828 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:43:32.828 05:34:19 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:43:32.828 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:32.828 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:43:32.828 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:32.828 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:32.828 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:32.828 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:32.828 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:32.828 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:32.828 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:32.828 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:32.828 05:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:32.828 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:32.828 05:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:32.828 05:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:32.828 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:32.828 "name": "raid_bdev1", 00:43:32.828 "uuid": "64c795f7-389d-4489-903e-3b08ebfe4a97", 00:43:32.828 "strip_size_kb": 64, 00:43:32.828 "state": "configuring", 00:43:32.828 "raid_level": "raid5f", 00:43:32.828 
"superblock": true, 00:43:32.828 "num_base_bdevs": 3, 00:43:32.828 "num_base_bdevs_discovered": 1, 00:43:32.828 "num_base_bdevs_operational": 2, 00:43:32.828 "base_bdevs_list": [ 00:43:32.828 { 00:43:32.828 "name": null, 00:43:32.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:32.828 "is_configured": false, 00:43:32.828 "data_offset": 2048, 00:43:32.828 "data_size": 63488 00:43:32.828 }, 00:43:32.828 { 00:43:32.828 "name": "pt2", 00:43:32.828 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:32.828 "is_configured": true, 00:43:32.828 "data_offset": 2048, 00:43:32.828 "data_size": 63488 00:43:32.828 }, 00:43:32.828 { 00:43:32.828 "name": null, 00:43:32.828 "uuid": "00000000-0000-0000-0000-000000000003", 00:43:32.828 "is_configured": false, 00:43:32.828 "data_offset": 2048, 00:43:32.828 "data_size": 63488 00:43:32.828 } 00:43:32.828 ] 00:43:32.828 }' 00:43:32.828 05:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:32.828 05:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:33.394 05:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:43:33.394 05:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:43:33.394 05:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:33.394 05:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:33.394 05:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:33.394 05:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:43:33.394 05:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:43:33.394 05:34:20 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:43:33.394 05:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:33.394 [2024-12-09 05:34:20.336865] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:43:33.394 [2024-12-09 05:34:20.336957] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:33.394 [2024-12-09 05:34:20.336990] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:43:33.394 [2024-12-09 05:34:20.337004] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:33.394 [2024-12-09 05:34:20.337699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:33.394 [2024-12-09 05:34:20.337754] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:43:33.394 [2024-12-09 05:34:20.337895] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:43:33.394 [2024-12-09 05:34:20.337944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:43:33.394 [2024-12-09 05:34:20.338123] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:43:33.394 [2024-12-09 05:34:20.338174] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:43:33.394 [2024-12-09 05:34:20.338535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:43:33.394 [2024-12-09 05:34:20.343546] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:43:33.394 pt3 00:43:33.394 [2024-12-09 05:34:20.343763] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:43:33.394 [2024-12-09 05:34:20.344121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:33.394 05:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:43:33.394 05:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:43:33.394 05:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:33.394 05:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:33.394 05:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:33.394 05:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:33.394 05:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:33.394 05:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:33.394 05:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:33.394 05:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:33.394 05:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:33.394 05:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:33.394 05:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:33.394 05:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:33.394 05:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:33.394 05:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:33.652 05:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:33.652 "name": "raid_bdev1", 00:43:33.652 "uuid": "64c795f7-389d-4489-903e-3b08ebfe4a97", 00:43:33.652 "strip_size_kb": 64, 00:43:33.652 "state": "online", 00:43:33.652 "raid_level": 
"raid5f", 00:43:33.652 "superblock": true, 00:43:33.652 "num_base_bdevs": 3, 00:43:33.652 "num_base_bdevs_discovered": 2, 00:43:33.652 "num_base_bdevs_operational": 2, 00:43:33.652 "base_bdevs_list": [ 00:43:33.652 { 00:43:33.652 "name": null, 00:43:33.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:33.652 "is_configured": false, 00:43:33.652 "data_offset": 2048, 00:43:33.652 "data_size": 63488 00:43:33.652 }, 00:43:33.652 { 00:43:33.652 "name": "pt2", 00:43:33.652 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:33.652 "is_configured": true, 00:43:33.652 "data_offset": 2048, 00:43:33.652 "data_size": 63488 00:43:33.652 }, 00:43:33.652 { 00:43:33.652 "name": "pt3", 00:43:33.652 "uuid": "00000000-0000-0000-0000-000000000003", 00:43:33.652 "is_configured": true, 00:43:33.652 "data_offset": 2048, 00:43:33.652 "data_size": 63488 00:43:33.652 } 00:43:33.652 ] 00:43:33.652 }' 00:43:33.652 05:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:33.652 05:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:34.219 05:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:43:34.219 05:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:34.219 05:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:34.219 05:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:43:34.219 05:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:34.219 05:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:43:34.219 05:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:43:34.219 05:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
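The trace above fetches all raid bdevs with `rpc_cmd bdev_raid_get_bdevs all` and picks out `raid_bdev1` with jq's `.[] | select(.name == "raid_bdev1")`. A minimal Python equivalent of that selection, run against a trimmed copy of the JSON shown in the log (field values are taken from the trace; the `select_bdev` helper name is ours):

```python
import json

# Trimmed copy of the bdev_raid_get_bdevs output captured in the trace above.
raid_bdevs_json = '''
[
  {
    "name": "raid_bdev1",
    "uuid": "64c795f7-389d-4489-903e-3b08ebfe4a97",
    "strip_size_kb": 64,
    "state": "online",
    "raid_level": "raid5f",
    "superblock": true,
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 2,
    "base_bdevs_list": [
      {"name": null, "is_configured": false},
      {"name": "pt2", "is_configured": true},
      {"name": "pt3", "is_configured": true}
    ]
  }
]
'''

def select_bdev(bdevs_json, name):
    # Python analogue of jq's '.[] | select(.name == "...")':
    # return the first bdev entry whose "name" matches, else None.
    return next((b for b in json.loads(bdevs_json) if b["name"] == name), None)

info = select_bdev(raid_bdevs_json, "raid_bdev1")
print(info["state"], info["num_base_bdevs_discovered"])  # online 2
```

This mirrors what `verify_raid_bdev_state` checks: the bdev is `online` with 2 of 3 base bdevs discovered after one base bdev was removed.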
00:43:34.219 05:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:34.219 05:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:34.219 [2024-12-09 05:34:20.950423] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:43:34.219 05:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:34.219 05:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 64c795f7-389d-4489-903e-3b08ebfe4a97 '!=' 64c795f7-389d-4489-903e-3b08ebfe4a97 ']' 00:43:34.219 05:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81603 00:43:34.219 05:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81603 ']' 00:43:34.219 05:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81603 00:43:34.219 05:34:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:43:34.219 05:34:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:34.219 05:34:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81603 00:43:34.219 killing process with pid 81603 00:43:34.219 05:34:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:34.219 05:34:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:34.219 05:34:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81603' 00:43:34.219 05:34:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81603 00:43:34.219 [2024-12-09 05:34:21.034603] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:43:34.219 05:34:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81603 
00:43:34.219 [2024-12-09 05:34:21.034727] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:34.219 [2024-12-09 05:34:21.034830] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:34.219 [2024-12-09 05:34:21.034867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:43:34.478 [2024-12-09 05:34:21.298634] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:43:35.855 05:34:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:43:35.855 ************************************ 00:43:35.855 END TEST raid5f_superblock_test 00:43:35.855 ************************************ 00:43:35.855 00:43:35.855 real 0m8.913s 00:43:35.855 user 0m14.430s 00:43:35.855 sys 0m1.360s 00:43:35.855 05:34:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:35.855 05:34:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:43:35.855 05:34:22 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:43:35.855 05:34:22 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:43:35.855 05:34:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:43:35.855 05:34:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:35.855 05:34:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:43:35.855 ************************************ 00:43:35.855 START TEST raid5f_rebuild_test 00:43:35.855 ************************************ 00:43:35.855 05:34:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:43:35.855 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:43:35.855 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:43:35.855 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:43:35.855 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:43:35.855 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:43:35.855 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:43:35.855 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:43:35.855 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:43:35.855 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:43:35.855 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:43:35.856 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:43:35.856 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:43:35.856 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:43:35.856 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:43:35.856 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:43:35.856 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:43:35.856 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:43:35.856 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:43:35.856 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:43:35.856 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:43:35.856 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:43:35.856 05:34:22 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:43:35.856 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:43:35.856 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:43:35.856 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:43:35.856 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:43:35.856 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:43:35.856 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:43:35.856 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82062 00:43:35.856 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82062 00:43:35.856 05:34:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 82062 ']' 00:43:35.856 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:43:35.856 05:34:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:35.856 05:34:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:35.856 05:34:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:35.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:43:35.856 05:34:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:35.856 05:34:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:35.856 [2024-12-09 05:34:22.618540] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:43:35.856 [2024-12-09 05:34:22.619004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82062 ] 00:43:35.856 I/O size of 3145728 is greater than zero copy threshold (65536). 00:43:35.856 Zero copy mechanism will not be used. 00:43:35.856 [2024-12-09 05:34:22.809844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:36.145 [2024-12-09 05:34:22.940945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:36.403 [2024-12-09 05:34:23.147816] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:43:36.403 [2024-12-09 05:34:23.148170] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:43:36.660 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:36.660 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:43:36.660 05:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:43:36.660 05:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:43:36.660 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:36.660 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:36.918 BaseBdev1_malloc 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:36.918 
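The bdevperf invocation above passes `-o 3M`, and the notice "I/O size of 3145728 is greater than zero copy threshold (65536)" follows directly from that: 3 MiB exceeds the 64 KiB threshold, so zero copy is disabled for the run. A sketch of the arithmetic (constant names are ours; the values are quoted from the log):

```python
# Values from the bdevperf command line and notice in the trace above.
io_size = 3 * 1024 * 1024        # -o 3M, i.e. 3145728 bytes as logged
zero_copy_threshold = 65536      # 64 KiB threshold quoted in the notice

# Zero copy is skipped whenever the configured I/O size exceeds the threshold.
uses_zero_copy = io_size <= zero_copy_threshold
print(io_size, uses_zero_copy)   # 3145728 False
```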
05:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:36.918 [2024-12-09 05:34:23.680198] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:43:36.918 [2024-12-09 05:34:23.680289] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:36.918 [2024-12-09 05:34:23.680321] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:43:36.918 [2024-12-09 05:34:23.680338] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:36.918 [2024-12-09 05:34:23.683432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:36.918 [2024-12-09 05:34:23.683686] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:43:36.918 BaseBdev1 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:36.918 BaseBdev2_malloc 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:36.918 [2024-12-09 05:34:23.736613] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:43:36.918 [2024-12-09 05:34:23.736934] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:36.918 [2024-12-09 05:34:23.736981] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:43:36.918 [2024-12-09 05:34:23.737002] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:36.918 [2024-12-09 05:34:23.740041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:36.918 [2024-12-09 05:34:23.740300] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:43:36.918 BaseBdev2 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:36.918 BaseBdev3_malloc 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:36.918 [2024-12-09 05:34:23.799324] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:43:36.918 [2024-12-09 05:34:23.799580] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:36.918 [2024-12-09 05:34:23.799625] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:43:36.918 [2024-12-09 05:34:23.799646] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:36.918 [2024-12-09 05:34:23.802646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:36.918 [2024-12-09 05:34:23.802852] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:43:36.918 BaseBdev3 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:36.918 spare_malloc 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:36.918 spare_delay 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:36.918 [2024-12-09 05:34:23.864665] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:43:36.918 [2024-12-09 05:34:23.864766] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:36.918 [2024-12-09 05:34:23.864823] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:43:36.918 [2024-12-09 05:34:23.864843] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:36.918 [2024-12-09 05:34:23.867982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:36.918 [2024-12-09 05:34:23.868049] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:43:36.918 spare 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:36.918 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:36.919 [2024-12-09 05:34:23.872941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:43:36.919 [2024-12-09 05:34:23.875618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:43:36.919 [2024-12-09 05:34:23.875952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:43:36.919 [2024-12-09 05:34:23.876117] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:43:36.919 [2024-12-09 05:34:23.876137] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:43:36.919 [2024-12-09 
05:34:23.876521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:43:36.919 [2024-12-09 05:34:23.881771] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:43:36.919 [2024-12-09 05:34:23.881840] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:43:36.919 [2024-12-09 05:34:23.882202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:36.919 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:36.919 05:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:43:36.919 05:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:36.919 05:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:36.919 05:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:36.919 05:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:36.919 05:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:43:36.919 05:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:36.919 05:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:36.919 05:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:36.919 05:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:37.177 05:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:37.177 05:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:37.177 05:34:23 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:43:37.177 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:37.177 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:37.177 05:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:37.177 "name": "raid_bdev1", 00:43:37.177 "uuid": "fff280c6-3551-4923-8de9-01823c4b2c2f", 00:43:37.177 "strip_size_kb": 64, 00:43:37.177 "state": "online", 00:43:37.177 "raid_level": "raid5f", 00:43:37.177 "superblock": false, 00:43:37.177 "num_base_bdevs": 3, 00:43:37.177 "num_base_bdevs_discovered": 3, 00:43:37.177 "num_base_bdevs_operational": 3, 00:43:37.177 "base_bdevs_list": [ 00:43:37.177 { 00:43:37.177 "name": "BaseBdev1", 00:43:37.177 "uuid": "c1b284f5-0582-5232-9a83-01b85e80eafc", 00:43:37.177 "is_configured": true, 00:43:37.177 "data_offset": 0, 00:43:37.177 "data_size": 65536 00:43:37.177 }, 00:43:37.177 { 00:43:37.177 "name": "BaseBdev2", 00:43:37.177 "uuid": "ffbc1c49-6ce8-523c-bb13-166e0b625f6f", 00:43:37.177 "is_configured": true, 00:43:37.177 "data_offset": 0, 00:43:37.177 "data_size": 65536 00:43:37.177 }, 00:43:37.177 { 00:43:37.177 "name": "BaseBdev3", 00:43:37.177 "uuid": "8df0161e-dc2f-55a1-b075-6e7c9a158f6d", 00:43:37.177 "is_configured": true, 00:43:37.177 "data_offset": 0, 00:43:37.177 "data_size": 65536 00:43:37.177 } 00:43:37.177 ] 00:43:37.177 }' 00:43:37.177 05:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:37.177 05:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:37.741 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:43:37.741 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:43:37.741 05:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:37.741 05:34:24 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:37.741 [2024-12-09 05:34:24.432574] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:43:37.741 05:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:37.741 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:43:37.741 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:37.741 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:43:37.741 05:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:37.741 05:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:37.741 05:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:37.741 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:43:37.741 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:43:37.741 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:43:37.741 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:43:37.741 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:43:37.741 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:43:37.741 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:43:37.741 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:43:37.741 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:43:37.741 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:43:37.741 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:43:37.741 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:43:37.741 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:43:37.741 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:43:37.998 [2024-12-09 05:34:24.796554] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:43:37.998 /dev/nbd0 00:43:37.998 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:43:37.998 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:43:37.998 05:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:43:37.998 05:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:43:37.998 05:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:43:37.998 05:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:43:37.998 05:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:43:37.998 05:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:43:37.998 05:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:43:37.998 05:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:43:37.998 05:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:37.998 1+0 records in 00:43:37.998 1+0 records out 00:43:37.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519668 s, 7.9 MB/s 00:43:37.998 
05:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:37.998 05:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:43:37.998 05:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:37.998 05:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:43:37.998 05:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:43:37.998 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:37.998 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:43:37.998 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:43:37.998 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:43:37.998 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:43:37.998 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:43:38.570 512+0 records in 00:43:38.570 512+0 records out 00:43:38.570 67108864 bytes (67 MB, 64 MiB) copied, 0.492983 s, 136 MB/s 00:43:38.570 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:43:38.570 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:43:38.570 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:43:38.570 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:38.570 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:43:38.570 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
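The numbers in the full-stripe write above fit together: the raid bdev was created with `-z 64` (64 KiB strip) over 3 base bdevs of blocklen 512, the script computes `write_unit_size=256` blocks (with `echo 128` being the strip size in blocks), and `dd bs=131072 count=512` then writes 67108864 bytes (64 MiB) of full stripes. A sketch of that arithmetic as we read it from the trace (constant names are ours):

```python
# Parameters from the trace: blocklen 512, -z 64, three base bdevs.
BLOCKLEN = 512          # bytes per block ("blockcnt 131072, blocklen 512")
STRIP_SIZE_KB = 64      # -z 64 in the bdev_raid_create call
NUM_BASE_BDEVS = 3      # BaseBdev1..BaseBdev3

strip_size_blocks = STRIP_SIZE_KB * 1024 // BLOCKLEN        # 128 ("echo 128")
# raid5f stores parity on one bdev per stripe, so data spans N-1 bdevs.
write_unit_size = strip_size_blocks * (NUM_BASE_BDEVS - 1)  # 256 blocks
full_stripe_bytes = write_unit_size * BLOCKLEN              # 131072, the dd bs= value
total_bytes = full_stripe_bytes * 512                       # 67108864 copied by dd
print(strip_size_blocks, write_unit_size, full_stripe_bytes, total_bytes)
```

Writing in multiples of the full stripe keeps each dd block aligned to a complete parity stripe, which is why the test sets `write_unit_size` before driving I/O through the nbd device.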
00:43:38.570 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:43:38.827 [2024-12-09 05:34:25.676402] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:38.827 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:38.827 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:38.827 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:38.827 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:38.827 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:38.827 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:38.828 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:43:38.828 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:43:38.828 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:43:38.828 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:38.828 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:38.828 [2024-12-09 05:34:25.694440] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:43:38.828 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:38.828 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:43:38.828 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:38.828 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:38.828 05:34:25 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:38.828 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:38.828 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:38.828 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:38.828 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:38.828 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:38.828 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:38.828 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:38.828 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:38.828 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:38.828 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:38.828 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:38.828 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:38.828 "name": "raid_bdev1", 00:43:38.828 "uuid": "fff280c6-3551-4923-8de9-01823c4b2c2f", 00:43:38.828 "strip_size_kb": 64, 00:43:38.828 "state": "online", 00:43:38.828 "raid_level": "raid5f", 00:43:38.828 "superblock": false, 00:43:38.828 "num_base_bdevs": 3, 00:43:38.828 "num_base_bdevs_discovered": 2, 00:43:38.828 "num_base_bdevs_operational": 2, 00:43:38.828 "base_bdevs_list": [ 00:43:38.828 { 00:43:38.828 "name": null, 00:43:38.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:38.828 "is_configured": false, 00:43:38.828 "data_offset": 0, 00:43:38.828 "data_size": 65536 00:43:38.828 }, 00:43:38.828 { 00:43:38.828 
"name": "BaseBdev2", 00:43:38.828 "uuid": "ffbc1c49-6ce8-523c-bb13-166e0b625f6f", 00:43:38.828 "is_configured": true, 00:43:38.828 "data_offset": 0, 00:43:38.828 "data_size": 65536 00:43:38.828 }, 00:43:38.828 { 00:43:38.828 "name": "BaseBdev3", 00:43:38.828 "uuid": "8df0161e-dc2f-55a1-b075-6e7c9a158f6d", 00:43:38.828 "is_configured": true, 00:43:38.828 "data_offset": 0, 00:43:38.828 "data_size": 65536 00:43:38.828 } 00:43:38.828 ] 00:43:38.828 }' 00:43:38.828 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:38.828 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:39.393 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:43:39.393 05:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.393 05:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:39.393 [2024-12-09 05:34:26.210701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:43:39.393 [2024-12-09 05:34:26.226659] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:43:39.393 05:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.393 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:43:39.393 [2024-12-09 05:34:26.234002] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:43:40.327 05:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:40.327 05:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:40.327 05:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:43:40.327 05:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:43:40.327 05:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:40.327 05:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:40.327 05:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:40.327 05:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:40.327 05:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:40.327 05:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:40.327 05:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:40.327 "name": "raid_bdev1", 00:43:40.327 "uuid": "fff280c6-3551-4923-8de9-01823c4b2c2f", 00:43:40.327 "strip_size_kb": 64, 00:43:40.327 "state": "online", 00:43:40.327 "raid_level": "raid5f", 00:43:40.327 "superblock": false, 00:43:40.327 "num_base_bdevs": 3, 00:43:40.327 "num_base_bdevs_discovered": 3, 00:43:40.327 "num_base_bdevs_operational": 3, 00:43:40.327 "process": { 00:43:40.327 "type": "rebuild", 00:43:40.327 "target": "spare", 00:43:40.327 "progress": { 00:43:40.327 "blocks": 18432, 00:43:40.327 "percent": 14 00:43:40.327 } 00:43:40.327 }, 00:43:40.327 "base_bdevs_list": [ 00:43:40.327 { 00:43:40.327 "name": "spare", 00:43:40.327 "uuid": "774591c8-19eb-54ef-aa12-57a4c5e6a2d3", 00:43:40.327 "is_configured": true, 00:43:40.327 "data_offset": 0, 00:43:40.327 "data_size": 65536 00:43:40.327 }, 00:43:40.327 { 00:43:40.327 "name": "BaseBdev2", 00:43:40.327 "uuid": "ffbc1c49-6ce8-523c-bb13-166e0b625f6f", 00:43:40.327 "is_configured": true, 00:43:40.327 "data_offset": 0, 00:43:40.327 "data_size": 65536 00:43:40.327 }, 00:43:40.327 { 00:43:40.327 "name": "BaseBdev3", 00:43:40.327 "uuid": "8df0161e-dc2f-55a1-b075-6e7c9a158f6d", 00:43:40.327 "is_configured": true, 00:43:40.327 "data_offset": 0, 00:43:40.327 
"data_size": 65536 00:43:40.327 } 00:43:40.327 ] 00:43:40.327 }' 00:43:40.327 05:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:40.586 05:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:40.586 05:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:40.586 05:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:43:40.586 05:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:43:40.586 05:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:40.586 05:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:40.586 [2024-12-09 05:34:27.407618] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:43:40.586 [2024-12-09 05:34:27.447588] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:43:40.586 [2024-12-09 05:34:27.447673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:40.586 [2024-12-09 05:34:27.447700] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:43:40.586 [2024-12-09 05:34:27.447712] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:43:40.586 05:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:40.586 05:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:43:40.586 05:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:40.586 05:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:40.586 05:34:27 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:40.586 05:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:40.586 05:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:40.586 05:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:40.586 05:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:40.586 05:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:40.586 05:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:40.586 05:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:40.586 05:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:40.586 05:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:40.586 05:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:40.586 05:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:40.586 05:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:40.586 "name": "raid_bdev1", 00:43:40.586 "uuid": "fff280c6-3551-4923-8de9-01823c4b2c2f", 00:43:40.586 "strip_size_kb": 64, 00:43:40.586 "state": "online", 00:43:40.586 "raid_level": "raid5f", 00:43:40.586 "superblock": false, 00:43:40.586 "num_base_bdevs": 3, 00:43:40.586 "num_base_bdevs_discovered": 2, 00:43:40.586 "num_base_bdevs_operational": 2, 00:43:40.586 "base_bdevs_list": [ 00:43:40.586 { 00:43:40.586 "name": null, 00:43:40.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:40.586 "is_configured": false, 00:43:40.586 "data_offset": 0, 00:43:40.586 "data_size": 65536 00:43:40.586 }, 00:43:40.586 { 00:43:40.586 "name": "BaseBdev2", 00:43:40.586 
"uuid": "ffbc1c49-6ce8-523c-bb13-166e0b625f6f", 00:43:40.586 "is_configured": true, 00:43:40.586 "data_offset": 0, 00:43:40.586 "data_size": 65536 00:43:40.586 }, 00:43:40.586 { 00:43:40.586 "name": "BaseBdev3", 00:43:40.586 "uuid": "8df0161e-dc2f-55a1-b075-6e7c9a158f6d", 00:43:40.586 "is_configured": true, 00:43:40.586 "data_offset": 0, 00:43:40.586 "data_size": 65536 00:43:40.586 } 00:43:40.586 ] 00:43:40.586 }' 00:43:40.586 05:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:40.586 05:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:41.153 05:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:43:41.153 05:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:41.153 05:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:43:41.153 05:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:43:41.153 05:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:41.153 05:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:41.153 05:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:41.153 05:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:41.153 05:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:41.153 05:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:41.153 05:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:41.153 "name": "raid_bdev1", 00:43:41.153 "uuid": "fff280c6-3551-4923-8de9-01823c4b2c2f", 00:43:41.153 "strip_size_kb": 64, 00:43:41.153 "state": "online", 00:43:41.153 "raid_level": 
"raid5f", 00:43:41.153 "superblock": false, 00:43:41.153 "num_base_bdevs": 3, 00:43:41.153 "num_base_bdevs_discovered": 2, 00:43:41.153 "num_base_bdevs_operational": 2, 00:43:41.153 "base_bdevs_list": [ 00:43:41.153 { 00:43:41.153 "name": null, 00:43:41.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:41.153 "is_configured": false, 00:43:41.153 "data_offset": 0, 00:43:41.153 "data_size": 65536 00:43:41.153 }, 00:43:41.153 { 00:43:41.153 "name": "BaseBdev2", 00:43:41.153 "uuid": "ffbc1c49-6ce8-523c-bb13-166e0b625f6f", 00:43:41.153 "is_configured": true, 00:43:41.153 "data_offset": 0, 00:43:41.153 "data_size": 65536 00:43:41.153 }, 00:43:41.153 { 00:43:41.153 "name": "BaseBdev3", 00:43:41.153 "uuid": "8df0161e-dc2f-55a1-b075-6e7c9a158f6d", 00:43:41.153 "is_configured": true, 00:43:41.153 "data_offset": 0, 00:43:41.153 "data_size": 65536 00:43:41.153 } 00:43:41.153 ] 00:43:41.153 }' 00:43:41.153 05:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:41.412 05:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:43:41.412 05:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:41.413 05:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:43:41.413 05:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:43:41.413 05:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:41.413 05:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:41.413 [2024-12-09 05:34:28.197532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:43:41.413 [2024-12-09 05:34:28.212730] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:43:41.413 05:34:28 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:41.413 05:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:43:41.413 [2024-12-09 05:34:28.220266] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:43:42.348 05:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:42.348 05:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:42.348 05:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:43:42.348 05:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:43:42.348 05:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:42.348 05:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:42.348 05:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:42.348 05:34:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:42.348 05:34:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:42.348 05:34:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:42.348 05:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:42.348 "name": "raid_bdev1", 00:43:42.348 "uuid": "fff280c6-3551-4923-8de9-01823c4b2c2f", 00:43:42.348 "strip_size_kb": 64, 00:43:42.348 "state": "online", 00:43:42.348 "raid_level": "raid5f", 00:43:42.348 "superblock": false, 00:43:42.348 "num_base_bdevs": 3, 00:43:42.348 "num_base_bdevs_discovered": 3, 00:43:42.348 "num_base_bdevs_operational": 3, 00:43:42.348 "process": { 00:43:42.348 "type": "rebuild", 00:43:42.348 "target": "spare", 00:43:42.348 "progress": { 00:43:42.348 "blocks": 18432, 00:43:42.348 
"percent": 14 00:43:42.348 } 00:43:42.348 }, 00:43:42.348 "base_bdevs_list": [ 00:43:42.348 { 00:43:42.348 "name": "spare", 00:43:42.348 "uuid": "774591c8-19eb-54ef-aa12-57a4c5e6a2d3", 00:43:42.349 "is_configured": true, 00:43:42.349 "data_offset": 0, 00:43:42.349 "data_size": 65536 00:43:42.349 }, 00:43:42.349 { 00:43:42.349 "name": "BaseBdev2", 00:43:42.349 "uuid": "ffbc1c49-6ce8-523c-bb13-166e0b625f6f", 00:43:42.349 "is_configured": true, 00:43:42.349 "data_offset": 0, 00:43:42.349 "data_size": 65536 00:43:42.349 }, 00:43:42.349 { 00:43:42.349 "name": "BaseBdev3", 00:43:42.349 "uuid": "8df0161e-dc2f-55a1-b075-6e7c9a158f6d", 00:43:42.349 "is_configured": true, 00:43:42.349 "data_offset": 0, 00:43:42.349 "data_size": 65536 00:43:42.349 } 00:43:42.349 ] 00:43:42.349 }' 00:43:42.349 05:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:42.608 05:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:42.608 05:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:42.608 05:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:43:42.608 05:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:43:42.608 05:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:43:42.608 05:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:43:42.608 05:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=605 00:43:42.608 05:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:43:42.608 05:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:42.608 05:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:43:42.608 05:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:43:42.608 05:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:43:42.608 05:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:42.608 05:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:42.608 05:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:42.608 05:34:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:42.608 05:34:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:42.608 05:34:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:42.608 05:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:42.608 "name": "raid_bdev1", 00:43:42.608 "uuid": "fff280c6-3551-4923-8de9-01823c4b2c2f", 00:43:42.608 "strip_size_kb": 64, 00:43:42.608 "state": "online", 00:43:42.608 "raid_level": "raid5f", 00:43:42.608 "superblock": false, 00:43:42.608 "num_base_bdevs": 3, 00:43:42.608 "num_base_bdevs_discovered": 3, 00:43:42.608 "num_base_bdevs_operational": 3, 00:43:42.608 "process": { 00:43:42.608 "type": "rebuild", 00:43:42.608 "target": "spare", 00:43:42.608 "progress": { 00:43:42.608 "blocks": 22528, 00:43:42.608 "percent": 17 00:43:42.608 } 00:43:42.608 }, 00:43:42.608 "base_bdevs_list": [ 00:43:42.608 { 00:43:42.608 "name": "spare", 00:43:42.608 "uuid": "774591c8-19eb-54ef-aa12-57a4c5e6a2d3", 00:43:42.608 "is_configured": true, 00:43:42.608 "data_offset": 0, 00:43:42.608 "data_size": 65536 00:43:42.608 }, 00:43:42.608 { 00:43:42.608 "name": "BaseBdev2", 00:43:42.608 "uuid": "ffbc1c49-6ce8-523c-bb13-166e0b625f6f", 00:43:42.608 "is_configured": true, 00:43:42.608 "data_offset": 0, 00:43:42.608 
"data_size": 65536 00:43:42.608 }, 00:43:42.608 { 00:43:42.608 "name": "BaseBdev3", 00:43:42.608 "uuid": "8df0161e-dc2f-55a1-b075-6e7c9a158f6d", 00:43:42.608 "is_configured": true, 00:43:42.608 "data_offset": 0, 00:43:42.608 "data_size": 65536 00:43:42.608 } 00:43:42.608 ] 00:43:42.608 }' 00:43:42.608 05:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:42.608 05:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:42.608 05:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:42.608 05:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:43:42.608 05:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:43:43.587 05:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:43:43.587 05:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:43.587 05:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:43.587 05:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:43:43.587 05:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:43:43.587 05:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:43.587 05:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:43.587 05:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:43.587 05:34:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:43.587 05:34:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:43.845 05:34:30 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:43.845 05:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:43.845 "name": "raid_bdev1", 00:43:43.845 "uuid": "fff280c6-3551-4923-8de9-01823c4b2c2f", 00:43:43.845 "strip_size_kb": 64, 00:43:43.845 "state": "online", 00:43:43.845 "raid_level": "raid5f", 00:43:43.845 "superblock": false, 00:43:43.845 "num_base_bdevs": 3, 00:43:43.845 "num_base_bdevs_discovered": 3, 00:43:43.845 "num_base_bdevs_operational": 3, 00:43:43.845 "process": { 00:43:43.845 "type": "rebuild", 00:43:43.845 "target": "spare", 00:43:43.845 "progress": { 00:43:43.845 "blocks": 47104, 00:43:43.845 "percent": 35 00:43:43.845 } 00:43:43.845 }, 00:43:43.845 "base_bdevs_list": [ 00:43:43.845 { 00:43:43.845 "name": "spare", 00:43:43.845 "uuid": "774591c8-19eb-54ef-aa12-57a4c5e6a2d3", 00:43:43.845 "is_configured": true, 00:43:43.845 "data_offset": 0, 00:43:43.845 "data_size": 65536 00:43:43.845 }, 00:43:43.845 { 00:43:43.845 "name": "BaseBdev2", 00:43:43.845 "uuid": "ffbc1c49-6ce8-523c-bb13-166e0b625f6f", 00:43:43.845 "is_configured": true, 00:43:43.846 "data_offset": 0, 00:43:43.846 "data_size": 65536 00:43:43.846 }, 00:43:43.846 { 00:43:43.846 "name": "BaseBdev3", 00:43:43.846 "uuid": "8df0161e-dc2f-55a1-b075-6e7c9a158f6d", 00:43:43.846 "is_configured": true, 00:43:43.846 "data_offset": 0, 00:43:43.846 "data_size": 65536 00:43:43.846 } 00:43:43.846 ] 00:43:43.846 }' 00:43:43.846 05:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:43.846 05:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:43.846 05:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:43.846 05:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:43:43.846 05:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:43:44.782 05:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:43:44.782 05:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:44.782 05:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:44.782 05:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:43:44.782 05:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:43:44.782 05:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:44.782 05:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:44.782 05:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:44.782 05:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:44.782 05:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:44.782 05:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:45.041 05:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:45.041 "name": "raid_bdev1", 00:43:45.041 "uuid": "fff280c6-3551-4923-8de9-01823c4b2c2f", 00:43:45.041 "strip_size_kb": 64, 00:43:45.041 "state": "online", 00:43:45.041 "raid_level": "raid5f", 00:43:45.041 "superblock": false, 00:43:45.041 "num_base_bdevs": 3, 00:43:45.041 "num_base_bdevs_discovered": 3, 00:43:45.041 "num_base_bdevs_operational": 3, 00:43:45.041 "process": { 00:43:45.041 "type": "rebuild", 00:43:45.041 "target": "spare", 00:43:45.041 "progress": { 00:43:45.041 "blocks": 69632, 00:43:45.041 "percent": 53 00:43:45.041 } 00:43:45.041 }, 00:43:45.041 "base_bdevs_list": [ 00:43:45.041 { 00:43:45.041 "name": "spare", 00:43:45.041 "uuid": 
"774591c8-19eb-54ef-aa12-57a4c5e6a2d3", 00:43:45.041 "is_configured": true, 00:43:45.041 "data_offset": 0, 00:43:45.041 "data_size": 65536 00:43:45.041 }, 00:43:45.041 { 00:43:45.041 "name": "BaseBdev2", 00:43:45.041 "uuid": "ffbc1c49-6ce8-523c-bb13-166e0b625f6f", 00:43:45.041 "is_configured": true, 00:43:45.041 "data_offset": 0, 00:43:45.041 "data_size": 65536 00:43:45.041 }, 00:43:45.041 { 00:43:45.041 "name": "BaseBdev3", 00:43:45.041 "uuid": "8df0161e-dc2f-55a1-b075-6e7c9a158f6d", 00:43:45.041 "is_configured": true, 00:43:45.041 "data_offset": 0, 00:43:45.041 "data_size": 65536 00:43:45.041 } 00:43:45.041 ] 00:43:45.041 }' 00:43:45.041 05:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:45.041 05:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:45.041 05:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:45.041 05:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:43:45.041 05:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:43:45.977 05:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:43:45.977 05:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:45.977 05:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:45.977 05:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:43:45.977 05:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:43:45.977 05:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:45.977 05:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:45.977 05:34:32 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:45.977 05:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:45.977 05:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:45.977 05:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:45.977 05:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:45.977 "name": "raid_bdev1", 00:43:45.977 "uuid": "fff280c6-3551-4923-8de9-01823c4b2c2f", 00:43:45.977 "strip_size_kb": 64, 00:43:45.977 "state": "online", 00:43:45.977 "raid_level": "raid5f", 00:43:45.977 "superblock": false, 00:43:45.977 "num_base_bdevs": 3, 00:43:45.977 "num_base_bdevs_discovered": 3, 00:43:45.977 "num_base_bdevs_operational": 3, 00:43:45.977 "process": { 00:43:45.977 "type": "rebuild", 00:43:45.977 "target": "spare", 00:43:45.977 "progress": { 00:43:45.977 "blocks": 94208, 00:43:45.977 "percent": 71 00:43:45.977 } 00:43:45.977 }, 00:43:45.977 "base_bdevs_list": [ 00:43:45.977 { 00:43:45.977 "name": "spare", 00:43:45.977 "uuid": "774591c8-19eb-54ef-aa12-57a4c5e6a2d3", 00:43:45.977 "is_configured": true, 00:43:45.977 "data_offset": 0, 00:43:45.977 "data_size": 65536 00:43:45.977 }, 00:43:45.977 { 00:43:45.977 "name": "BaseBdev2", 00:43:45.977 "uuid": "ffbc1c49-6ce8-523c-bb13-166e0b625f6f", 00:43:45.977 "is_configured": true, 00:43:45.977 "data_offset": 0, 00:43:45.977 "data_size": 65536 00:43:45.977 }, 00:43:45.977 { 00:43:45.977 "name": "BaseBdev3", 00:43:45.977 "uuid": "8df0161e-dc2f-55a1-b075-6e7c9a158f6d", 00:43:45.977 "is_configured": true, 00:43:45.977 "data_offset": 0, 00:43:45.977 "data_size": 65536 00:43:45.977 } 00:43:45.977 ] 00:43:45.977 }' 00:43:45.977 05:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:46.235 05:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:46.235 05:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:46.235 05:34:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:43:46.235 05:34:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:43:47.169 05:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:43:47.170 05:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:47.170 05:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:47.170 05:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:43:47.170 05:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:43:47.170 05:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:47.170 05:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:47.170 05:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:47.170 05:34:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:47.170 05:34:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:47.170 05:34:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:47.170 05:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:47.170 "name": "raid_bdev1", 00:43:47.170 "uuid": "fff280c6-3551-4923-8de9-01823c4b2c2f", 00:43:47.170 "strip_size_kb": 64, 00:43:47.170 "state": "online", 00:43:47.170 "raid_level": "raid5f", 00:43:47.170 "superblock": false, 00:43:47.170 "num_base_bdevs": 3, 00:43:47.170 "num_base_bdevs_discovered": 3, 00:43:47.170 
"num_base_bdevs_operational": 3, 00:43:47.170 "process": { 00:43:47.170 "type": "rebuild", 00:43:47.170 "target": "spare", 00:43:47.170 "progress": { 00:43:47.170 "blocks": 116736, 00:43:47.170 "percent": 89 00:43:47.170 } 00:43:47.170 }, 00:43:47.170 "base_bdevs_list": [ 00:43:47.170 { 00:43:47.170 "name": "spare", 00:43:47.170 "uuid": "774591c8-19eb-54ef-aa12-57a4c5e6a2d3", 00:43:47.170 "is_configured": true, 00:43:47.170 "data_offset": 0, 00:43:47.170 "data_size": 65536 00:43:47.170 }, 00:43:47.170 { 00:43:47.170 "name": "BaseBdev2", 00:43:47.170 "uuid": "ffbc1c49-6ce8-523c-bb13-166e0b625f6f", 00:43:47.170 "is_configured": true, 00:43:47.170 "data_offset": 0, 00:43:47.170 "data_size": 65536 00:43:47.170 }, 00:43:47.170 { 00:43:47.170 "name": "BaseBdev3", 00:43:47.170 "uuid": "8df0161e-dc2f-55a1-b075-6e7c9a158f6d", 00:43:47.170 "is_configured": true, 00:43:47.170 "data_offset": 0, 00:43:47.170 "data_size": 65536 00:43:47.170 } 00:43:47.170 ] 00:43:47.170 }' 00:43:47.170 05:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:47.428 05:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:47.428 05:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:47.428 05:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:43:47.428 05:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:43:47.995 [2024-12-09 05:34:34.689149] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:43:47.995 [2024-12-09 05:34:34.689246] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:43:47.995 [2024-12-09 05:34:34.689318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:48.252 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:43:48.253 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:48.253 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:48.253 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:43:48.253 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:43:48.253 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:48.253 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:48.253 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:48.253 05:34:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:48.253 05:34:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:48.253 05:34:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:48.510 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:48.510 "name": "raid_bdev1", 00:43:48.510 "uuid": "fff280c6-3551-4923-8de9-01823c4b2c2f", 00:43:48.510 "strip_size_kb": 64, 00:43:48.510 "state": "online", 00:43:48.510 "raid_level": "raid5f", 00:43:48.510 "superblock": false, 00:43:48.510 "num_base_bdevs": 3, 00:43:48.510 "num_base_bdevs_discovered": 3, 00:43:48.510 "num_base_bdevs_operational": 3, 00:43:48.510 "base_bdevs_list": [ 00:43:48.510 { 00:43:48.510 "name": "spare", 00:43:48.510 "uuid": "774591c8-19eb-54ef-aa12-57a4c5e6a2d3", 00:43:48.510 "is_configured": true, 00:43:48.510 "data_offset": 0, 00:43:48.510 "data_size": 65536 00:43:48.510 }, 00:43:48.510 { 00:43:48.510 "name": "BaseBdev2", 00:43:48.510 "uuid": "ffbc1c49-6ce8-523c-bb13-166e0b625f6f", 00:43:48.510 "is_configured": true, 00:43:48.510 
"data_offset": 0, 00:43:48.510 "data_size": 65536 00:43:48.510 }, 00:43:48.510 { 00:43:48.510 "name": "BaseBdev3", 00:43:48.510 "uuid": "8df0161e-dc2f-55a1-b075-6e7c9a158f6d", 00:43:48.510 "is_configured": true, 00:43:48.510 "data_offset": 0, 00:43:48.510 "data_size": 65536 00:43:48.510 } 00:43:48.510 ] 00:43:48.510 }' 00:43:48.510 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:48.510 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:43:48.510 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:48.510 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:43:48.510 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:43:48.510 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:43:48.510 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:48.510 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:43:48.510 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:43:48.510 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:48.510 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:48.510 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:48.510 05:34:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:48.510 05:34:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:48.510 05:34:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:48.510 05:34:35 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:48.510 "name": "raid_bdev1", 00:43:48.510 "uuid": "fff280c6-3551-4923-8de9-01823c4b2c2f", 00:43:48.510 "strip_size_kb": 64, 00:43:48.510 "state": "online", 00:43:48.510 "raid_level": "raid5f", 00:43:48.510 "superblock": false, 00:43:48.510 "num_base_bdevs": 3, 00:43:48.510 "num_base_bdevs_discovered": 3, 00:43:48.510 "num_base_bdevs_operational": 3, 00:43:48.510 "base_bdevs_list": [ 00:43:48.511 { 00:43:48.511 "name": "spare", 00:43:48.511 "uuid": "774591c8-19eb-54ef-aa12-57a4c5e6a2d3", 00:43:48.511 "is_configured": true, 00:43:48.511 "data_offset": 0, 00:43:48.511 "data_size": 65536 00:43:48.511 }, 00:43:48.511 { 00:43:48.511 "name": "BaseBdev2", 00:43:48.511 "uuid": "ffbc1c49-6ce8-523c-bb13-166e0b625f6f", 00:43:48.511 "is_configured": true, 00:43:48.511 "data_offset": 0, 00:43:48.511 "data_size": 65536 00:43:48.511 }, 00:43:48.511 { 00:43:48.511 "name": "BaseBdev3", 00:43:48.511 "uuid": "8df0161e-dc2f-55a1-b075-6e7c9a158f6d", 00:43:48.511 "is_configured": true, 00:43:48.511 "data_offset": 0, 00:43:48.511 "data_size": 65536 00:43:48.511 } 00:43:48.511 ] 00:43:48.511 }' 00:43:48.511 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:48.511 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:43:48.511 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:48.769 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:43:48.769 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:43:48.769 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:48.769 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:48.769 05:34:35 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:48.769 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:48.769 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:43:48.769 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:48.769 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:48.769 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:48.769 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:48.769 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:48.769 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:48.769 05:34:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:48.769 05:34:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:48.769 05:34:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:48.769 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:48.769 "name": "raid_bdev1", 00:43:48.769 "uuid": "fff280c6-3551-4923-8de9-01823c4b2c2f", 00:43:48.769 "strip_size_kb": 64, 00:43:48.769 "state": "online", 00:43:48.769 "raid_level": "raid5f", 00:43:48.769 "superblock": false, 00:43:48.769 "num_base_bdevs": 3, 00:43:48.769 "num_base_bdevs_discovered": 3, 00:43:48.769 "num_base_bdevs_operational": 3, 00:43:48.769 "base_bdevs_list": [ 00:43:48.769 { 00:43:48.769 "name": "spare", 00:43:48.769 "uuid": "774591c8-19eb-54ef-aa12-57a4c5e6a2d3", 00:43:48.769 "is_configured": true, 00:43:48.769 "data_offset": 0, 00:43:48.769 "data_size": 65536 00:43:48.769 }, 00:43:48.769 { 00:43:48.769 
"name": "BaseBdev2", 00:43:48.769 "uuid": "ffbc1c49-6ce8-523c-bb13-166e0b625f6f", 00:43:48.769 "is_configured": true, 00:43:48.769 "data_offset": 0, 00:43:48.769 "data_size": 65536 00:43:48.769 }, 00:43:48.769 { 00:43:48.769 "name": "BaseBdev3", 00:43:48.769 "uuid": "8df0161e-dc2f-55a1-b075-6e7c9a158f6d", 00:43:48.769 "is_configured": true, 00:43:48.769 "data_offset": 0, 00:43:48.769 "data_size": 65536 00:43:48.769 } 00:43:48.769 ] 00:43:48.769 }' 00:43:48.769 05:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:48.769 05:34:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:49.333 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:43:49.333 05:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:49.333 05:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:49.333 [2024-12-09 05:34:36.045124] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:49.333 [2024-12-09 05:34:36.045369] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:43:49.333 [2024-12-09 05:34:36.045526] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:49.333 [2024-12-09 05:34:36.045639] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:49.333 [2024-12-09 05:34:36.045664] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:43:49.333 05:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:49.333 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:49.333 05:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:49.333 05:34:36 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:49.333 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:43:49.333 05:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:49.333 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:43:49.333 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:43:49.333 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:43:49.333 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:43:49.333 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:43:49.333 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:43:49.333 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:43:49.333 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:43:49.333 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:43:49.333 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:43:49.333 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:43:49.333 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:43:49.333 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:43:49.590 /dev/nbd0 00:43:49.590 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:43:49.590 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:43:49.590 05:34:36 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:43:49.590 05:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:43:49.590 05:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:43:49.590 05:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:43:49.590 05:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:43:49.590 05:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:43:49.590 05:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:43:49.590 05:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:43:49.590 05:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:49.590 1+0 records in 00:43:49.590 1+0 records out 00:43:49.590 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259938 s, 15.8 MB/s 00:43:49.590 05:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:49.590 05:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:43:49.590 05:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:49.590 05:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:43:49.590 05:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:43:49.590 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:49.590 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:43:49.590 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:43:49.847 /dev/nbd1 00:43:49.847 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:43:49.847 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:43:49.847 05:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:43:49.847 05:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:43:49.847 05:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:43:49.847 05:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:43:49.847 05:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:43:49.847 05:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:43:49.847 05:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:43:49.847 05:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:43:49.847 05:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:49.847 1+0 records in 00:43:49.847 1+0 records out 00:43:49.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402598 s, 10.2 MB/s 00:43:49.847 05:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:49.847 05:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:43:49.847 05:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:49.847 05:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:43:49.847 05:34:36 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:43:49.847 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:49.847 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:43:49.847 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:43:50.105 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:43:50.105 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:43:50.105 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:43:50.105 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:50.105 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:43:50.105 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:50.105 05:34:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:43:50.363 05:34:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:50.363 05:34:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:50.363 05:34:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:50.363 05:34:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:50.363 05:34:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:50.363 05:34:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:50.363 05:34:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:43:50.363 05:34:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:43:50.363 05:34:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:50.363 05:34:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:43:50.622 05:34:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:43:50.622 05:34:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:43:50.622 05:34:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:43:50.622 05:34:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:50.622 05:34:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:50.622 05:34:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:43:50.622 05:34:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:43:50.622 05:34:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:43:50.622 05:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:43:50.622 05:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82062 00:43:50.622 05:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 82062 ']' 00:43:50.622 05:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 82062 00:43:50.622 05:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:43:50.622 05:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:50.622 05:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82062 00:43:50.622 killing process with pid 82062 00:43:50.622 Received shutdown signal, test time was about 60.000000 seconds 00:43:50.622 00:43:50.622 Latency(us) 00:43:50.622 
[2024-12-09T05:34:37.594Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:50.622 [2024-12-09T05:34:37.594Z] =================================================================================================================== 00:43:50.622 [2024-12-09T05:34:37.594Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:43:50.622 05:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:50.622 05:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:50.622 05:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82062' 00:43:50.622 05:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 82062 00:43:50.622 [2024-12-09 05:34:37.588206] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:43:50.622 05:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 82062 00:43:51.193 [2024-12-09 05:34:37.907696] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:43:52.132 00:43:52.132 real 0m16.511s 00:43:52.132 user 0m21.084s 00:43:52.132 sys 0m2.114s 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:52.132 ************************************ 00:43:52.132 END TEST raid5f_rebuild_test 00:43:52.132 ************************************ 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:43:52.132 05:34:39 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:43:52.132 05:34:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:43:52.132 05:34:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:52.132 05:34:39 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:43:52.132 ************************************ 00:43:52.132 START TEST raid5f_rebuild_test_sb 00:43:52.132 ************************************ 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82513 00:43:52.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82513 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82513 ']' 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:52.132 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:52.133 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:52.133 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:52.133 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:52.390 I/O size of 3145728 is greater than zero copy threshold (65536). 00:43:52.390 Zero copy mechanism will not be used. 00:43:52.390 [2024-12-09 05:34:39.189496] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:43:52.390 [2024-12-09 05:34:39.189688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82513 ] 00:43:52.649 [2024-12-09 05:34:39.376968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:52.649 [2024-12-09 05:34:39.507819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:52.908 [2024-12-09 05:34:39.704466] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:43:52.908 [2024-12-09 05:34:39.704563] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:43:53.167 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:53.167 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:43:53.167 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:43:53.167 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:43:53.167 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.167 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:53.425 BaseBdev1_malloc 00:43:53.425 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:53.426 [2024-12-09 05:34:40.162944] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:43:53.426 [2024-12-09 05:34:40.163214] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:53.426 [2024-12-09 05:34:40.163258] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:43:53.426 [2024-12-09 05:34:40.163281] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:53.426 [2024-12-09 05:34:40.166276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:53.426 [2024-12-09 05:34:40.166490] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:43:53.426 BaseBdev1 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:53.426 BaseBdev2_malloc 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:53.426 [2024-12-09 05:34:40.216869] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:43:53.426 [2024-12-09 05:34:40.216952] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:43:53.426 [2024-12-09 05:34:40.216985] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:43:53.426 [2024-12-09 05:34:40.217003] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:53.426 [2024-12-09 05:34:40.219979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:53.426 [2024-12-09 05:34:40.220025] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:43:53.426 BaseBdev2 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:53.426 BaseBdev3_malloc 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:53.426 [2024-12-09 05:34:40.270580] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:43:53.426 [2024-12-09 05:34:40.270661] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:53.426 [2024-12-09 05:34:40.270695] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:43:53.426 [2024-12-09 
05:34:40.270715] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:53.426 [2024-12-09 05:34:40.273594] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:53.426 [2024-12-09 05:34:40.273654] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:43:53.426 BaseBdev3 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:53.426 spare_malloc 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:53.426 spare_delay 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:53.426 [2024-12-09 05:34:40.330216] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:43:53.426 [2024-12-09 05:34:40.330293] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:53.426 [2024-12-09 05:34:40.330318] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:43:53.426 [2024-12-09 05:34:40.330335] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:53.426 [2024-12-09 05:34:40.333251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:53.426 [2024-12-09 05:34:40.333312] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:43:53.426 spare 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:53.426 [2024-12-09 05:34:40.338311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:43:53.426 [2024-12-09 05:34:40.340850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:43:53.426 [2024-12-09 05:34:40.340938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:43:53.426 [2024-12-09 05:34:40.341195] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:43:53.426 [2024-12-09 05:34:40.341213] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:43:53.426 [2024-12-09 05:34:40.341484] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:43:53.426 [2024-12-09 05:34:40.346349] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:43:53.426 [2024-12-09 05:34:40.346380] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:43:53.426 [2024-12-09 05:34:40.346606] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:53.426 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.685 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:53.685 "name": "raid_bdev1", 00:43:53.685 "uuid": "8e824002-787f-4adb-8f83-694a91ef8297", 00:43:53.685 "strip_size_kb": 64, 00:43:53.685 "state": "online", 00:43:53.685 "raid_level": "raid5f", 00:43:53.685 "superblock": true, 00:43:53.685 "num_base_bdevs": 3, 00:43:53.685 "num_base_bdevs_discovered": 3, 00:43:53.685 "num_base_bdevs_operational": 3, 00:43:53.685 "base_bdevs_list": [ 00:43:53.685 { 00:43:53.685 "name": "BaseBdev1", 00:43:53.685 "uuid": "227334dc-eabd-53a3-9293-1ba82e3e26a5", 00:43:53.685 "is_configured": true, 00:43:53.685 "data_offset": 2048, 00:43:53.685 "data_size": 63488 00:43:53.685 }, 00:43:53.685 { 00:43:53.685 "name": "BaseBdev2", 00:43:53.685 "uuid": "01bb00c0-0dc0-5041-ad80-2715552e378c", 00:43:53.685 "is_configured": true, 00:43:53.685 "data_offset": 2048, 00:43:53.685 "data_size": 63488 00:43:53.685 }, 00:43:53.685 { 00:43:53.685 "name": "BaseBdev3", 00:43:53.685 "uuid": "a5ed1700-4722-5d53-91fc-27c7d4867917", 00:43:53.685 "is_configured": true, 00:43:53.685 "data_offset": 2048, 00:43:53.685 "data_size": 63488 00:43:53.685 } 00:43:53.685 ] 00:43:53.685 }' 00:43:53.685 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:53.685 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:53.947 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:43:53.947 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:43:53.947 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.947 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:53.947 [2024-12-09 05:34:40.880752] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:43:53.947 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:54.204 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:43:54.204 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:54.204 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:54.204 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:54.204 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:43:54.204 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:54.204 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:43:54.204 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:43:54.204 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:43:54.204 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:43:54.204 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:43:54.204 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:43:54.204 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:43:54.204 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:43:54.204 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:43:54.204 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:43:54.204 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:43:54.204 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:43:54.204 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:43:54.204 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:43:54.462 [2024-12-09 05:34:41.272697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:43:54.462 /dev/nbd0 00:43:54.462 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:43:54.462 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:43:54.462 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:43:54.462 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:43:54.462 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:43:54.462 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:43:54.462 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:43:54.462 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:43:54.462 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:43:54.462 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:43:54.462 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:54.462 1+0 records in 00:43:54.462 1+0 records out 00:43:54.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000599972 s, 6.8 MB/s 00:43:54.462 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:54.462 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:43:54.462 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:54.462 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:43:54.462 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:43:54.462 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:54.462 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:43:54.462 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:43:54.462 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:43:54.462 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:43:54.462 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:43:55.063 496+0 records in 00:43:55.063 496+0 records out 00:43:55.063 65011712 bytes (65 MB, 62 MiB) copied, 0.478534 s, 136 MB/s 00:43:55.063 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:43:55.063 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:43:55.063 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:43:55.063 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:55.063 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:43:55.063 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:43:55.063 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:43:55.323 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:55.323 [2024-12-09 05:34:42.134003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:55.323 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:55.323 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:55.323 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:55.323 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:55.323 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:55.323 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:43:55.323 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:43:55.323 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:43:55.323 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.323 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:55.323 [2024-12-09 05:34:42.151874] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:43:55.323 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.323 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:43:55.323 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:55.323 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:55.323 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:55.323 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:55.323 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:55.323 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:55.323 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:55.323 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:55.323 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:55.323 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:55.323 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.323 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:55.323 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:55.323 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.323 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:55.323 "name": "raid_bdev1", 00:43:55.323 "uuid": "8e824002-787f-4adb-8f83-694a91ef8297", 00:43:55.323 "strip_size_kb": 64, 00:43:55.323 "state": "online", 00:43:55.323 "raid_level": "raid5f", 00:43:55.323 "superblock": true, 00:43:55.323 "num_base_bdevs": 3, 00:43:55.323 "num_base_bdevs_discovered": 2, 00:43:55.323 "num_base_bdevs_operational": 2, 00:43:55.323 "base_bdevs_list": [ 00:43:55.323 { 00:43:55.323 "name": null, 00:43:55.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:55.323 "is_configured": 
false, 00:43:55.323 "data_offset": 0, 00:43:55.323 "data_size": 63488 00:43:55.323 }, 00:43:55.323 { 00:43:55.323 "name": "BaseBdev2", 00:43:55.323 "uuid": "01bb00c0-0dc0-5041-ad80-2715552e378c", 00:43:55.323 "is_configured": true, 00:43:55.323 "data_offset": 2048, 00:43:55.323 "data_size": 63488 00:43:55.323 }, 00:43:55.323 { 00:43:55.323 "name": "BaseBdev3", 00:43:55.323 "uuid": "a5ed1700-4722-5d53-91fc-27c7d4867917", 00:43:55.323 "is_configured": true, 00:43:55.323 "data_offset": 2048, 00:43:55.323 "data_size": 63488 00:43:55.323 } 00:43:55.323 ] 00:43:55.323 }' 00:43:55.323 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:55.323 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:55.891 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:43:55.891 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.891 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:55.891 [2024-12-09 05:34:42.664035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:43:55.891 [2024-12-09 05:34:42.679644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:43:55.891 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.891 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:43:55.891 [2024-12-09 05:34:42.687238] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:43:56.827 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:56.827 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:56.827 05:34:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:43:56.827 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:43:56.827 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:56.827 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:56.827 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:56.827 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:56.827 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:56.827 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:56.827 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:56.827 "name": "raid_bdev1", 00:43:56.827 "uuid": "8e824002-787f-4adb-8f83-694a91ef8297", 00:43:56.827 "strip_size_kb": 64, 00:43:56.827 "state": "online", 00:43:56.827 "raid_level": "raid5f", 00:43:56.827 "superblock": true, 00:43:56.827 "num_base_bdevs": 3, 00:43:56.827 "num_base_bdevs_discovered": 3, 00:43:56.827 "num_base_bdevs_operational": 3, 00:43:56.827 "process": { 00:43:56.827 "type": "rebuild", 00:43:56.827 "target": "spare", 00:43:56.827 "progress": { 00:43:56.827 "blocks": 18432, 00:43:56.827 "percent": 14 00:43:56.827 } 00:43:56.827 }, 00:43:56.827 "base_bdevs_list": [ 00:43:56.827 { 00:43:56.827 "name": "spare", 00:43:56.827 "uuid": "135ba70e-52c2-5718-ad83-8f380d7ec37f", 00:43:56.827 "is_configured": true, 00:43:56.827 "data_offset": 2048, 00:43:56.827 "data_size": 63488 00:43:56.827 }, 00:43:56.827 { 00:43:56.827 "name": "BaseBdev2", 00:43:56.827 "uuid": "01bb00c0-0dc0-5041-ad80-2715552e378c", 00:43:56.827 "is_configured": true, 00:43:56.827 "data_offset": 2048, 00:43:56.827 "data_size": 63488 
00:43:56.827 }, 00:43:56.827 { 00:43:56.827 "name": "BaseBdev3", 00:43:56.827 "uuid": "a5ed1700-4722-5d53-91fc-27c7d4867917", 00:43:56.827 "is_configured": true, 00:43:56.827 "data_offset": 2048, 00:43:56.827 "data_size": 63488 00:43:56.827 } 00:43:56.827 ] 00:43:56.827 }' 00:43:56.827 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:57.086 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:57.086 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:57.086 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:43:57.086 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:43:57.086 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.086 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:57.086 [2024-12-09 05:34:43.852455] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:43:57.086 [2024-12-09 05:34:43.900482] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:43:57.086 [2024-12-09 05:34:43.900571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:57.086 [2024-12-09 05:34:43.900600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:43:57.086 [2024-12-09 05:34:43.900611] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:43:57.086 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.086 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:43:57.086 05:34:43 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:57.086 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:57.086 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:43:57.086 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:43:57.086 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:57.086 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:57.086 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:57.086 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:57.086 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:57.086 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:57.086 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.086 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:57.086 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:57.086 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.086 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:57.086 "name": "raid_bdev1", 00:43:57.086 "uuid": "8e824002-787f-4adb-8f83-694a91ef8297", 00:43:57.086 "strip_size_kb": 64, 00:43:57.086 "state": "online", 00:43:57.086 "raid_level": "raid5f", 00:43:57.086 "superblock": true, 00:43:57.086 "num_base_bdevs": 3, 00:43:57.086 "num_base_bdevs_discovered": 2, 00:43:57.086 "num_base_bdevs_operational": 2, 00:43:57.086 "base_bdevs_list": [ 00:43:57.086 
{ 00:43:57.086 "name": null, 00:43:57.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:57.086 "is_configured": false, 00:43:57.086 "data_offset": 0, 00:43:57.086 "data_size": 63488 00:43:57.086 }, 00:43:57.086 { 00:43:57.086 "name": "BaseBdev2", 00:43:57.086 "uuid": "01bb00c0-0dc0-5041-ad80-2715552e378c", 00:43:57.086 "is_configured": true, 00:43:57.086 "data_offset": 2048, 00:43:57.086 "data_size": 63488 00:43:57.086 }, 00:43:57.086 { 00:43:57.086 "name": "BaseBdev3", 00:43:57.086 "uuid": "a5ed1700-4722-5d53-91fc-27c7d4867917", 00:43:57.086 "is_configured": true, 00:43:57.086 "data_offset": 2048, 00:43:57.086 "data_size": 63488 00:43:57.086 } 00:43:57.086 ] 00:43:57.086 }' 00:43:57.086 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:57.086 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:57.651 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:43:57.651 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:57.651 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:43:57.651 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:43:57.651 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:57.651 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:57.651 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.651 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:57.651 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:57.651 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:43:57.651 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:57.651 "name": "raid_bdev1", 00:43:57.651 "uuid": "8e824002-787f-4adb-8f83-694a91ef8297", 00:43:57.651 "strip_size_kb": 64, 00:43:57.651 "state": "online", 00:43:57.651 "raid_level": "raid5f", 00:43:57.651 "superblock": true, 00:43:57.651 "num_base_bdevs": 3, 00:43:57.651 "num_base_bdevs_discovered": 2, 00:43:57.651 "num_base_bdevs_operational": 2, 00:43:57.651 "base_bdevs_list": [ 00:43:57.651 { 00:43:57.651 "name": null, 00:43:57.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:57.651 "is_configured": false, 00:43:57.651 "data_offset": 0, 00:43:57.651 "data_size": 63488 00:43:57.651 }, 00:43:57.651 { 00:43:57.651 "name": "BaseBdev2", 00:43:57.651 "uuid": "01bb00c0-0dc0-5041-ad80-2715552e378c", 00:43:57.651 "is_configured": true, 00:43:57.651 "data_offset": 2048, 00:43:57.651 "data_size": 63488 00:43:57.651 }, 00:43:57.651 { 00:43:57.651 "name": "BaseBdev3", 00:43:57.651 "uuid": "a5ed1700-4722-5d53-91fc-27c7d4867917", 00:43:57.651 "is_configured": true, 00:43:57.651 "data_offset": 2048, 00:43:57.651 "data_size": 63488 00:43:57.651 } 00:43:57.651 ] 00:43:57.651 }' 00:43:57.651 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:57.651 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:43:57.651 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:57.651 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:43:57.651 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:43:57.651 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.651 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:43:57.909 [2024-12-09 05:34:44.624794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:43:57.909 [2024-12-09 05:34:44.639792] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:43:57.909 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.909 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:43:57.909 [2024-12-09 05:34:44.647204] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:43:58.839 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:58.839 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:58.839 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:43:58.839 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:43:58.839 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:58.839 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:58.839 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:58.839 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:58.839 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:58.839 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:58.839 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:58.839 "name": "raid_bdev1", 00:43:58.839 "uuid": "8e824002-787f-4adb-8f83-694a91ef8297", 00:43:58.839 "strip_size_kb": 64, 00:43:58.839 "state": "online", 
00:43:58.839 "raid_level": "raid5f", 00:43:58.839 "superblock": true, 00:43:58.839 "num_base_bdevs": 3, 00:43:58.839 "num_base_bdevs_discovered": 3, 00:43:58.839 "num_base_bdevs_operational": 3, 00:43:58.839 "process": { 00:43:58.839 "type": "rebuild", 00:43:58.839 "target": "spare", 00:43:58.839 "progress": { 00:43:58.839 "blocks": 18432, 00:43:58.839 "percent": 14 00:43:58.839 } 00:43:58.839 }, 00:43:58.839 "base_bdevs_list": [ 00:43:58.839 { 00:43:58.839 "name": "spare", 00:43:58.839 "uuid": "135ba70e-52c2-5718-ad83-8f380d7ec37f", 00:43:58.839 "is_configured": true, 00:43:58.839 "data_offset": 2048, 00:43:58.839 "data_size": 63488 00:43:58.839 }, 00:43:58.839 { 00:43:58.839 "name": "BaseBdev2", 00:43:58.839 "uuid": "01bb00c0-0dc0-5041-ad80-2715552e378c", 00:43:58.839 "is_configured": true, 00:43:58.839 "data_offset": 2048, 00:43:58.839 "data_size": 63488 00:43:58.839 }, 00:43:58.839 { 00:43:58.839 "name": "BaseBdev3", 00:43:58.839 "uuid": "a5ed1700-4722-5d53-91fc-27c7d4867917", 00:43:58.839 "is_configured": true, 00:43:58.839 "data_offset": 2048, 00:43:58.839 "data_size": 63488 00:43:58.839 } 00:43:58.839 ] 00:43:58.839 }' 00:43:58.839 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:58.839 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:58.839 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:59.096 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:43:59.096 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:43:59.096 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:43:59.096 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:43:59.096 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 
-- # local num_base_bdevs_operational=3 00:43:59.096 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:43:59.096 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=621 00:43:59.096 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:43:59.096 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:59.096 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:59.096 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:43:59.096 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:43:59.096 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:59.096 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:59.096 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:59.096 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:59.096 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:43:59.096 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:59.096 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:59.096 "name": "raid_bdev1", 00:43:59.096 "uuid": "8e824002-787f-4adb-8f83-694a91ef8297", 00:43:59.096 "strip_size_kb": 64, 00:43:59.096 "state": "online", 00:43:59.096 "raid_level": "raid5f", 00:43:59.096 "superblock": true, 00:43:59.096 "num_base_bdevs": 3, 00:43:59.096 "num_base_bdevs_discovered": 3, 00:43:59.096 "num_base_bdevs_operational": 3, 00:43:59.096 "process": { 00:43:59.096 "type": 
"rebuild", 00:43:59.096 "target": "spare", 00:43:59.096 "progress": { 00:43:59.096 "blocks": 22528, 00:43:59.096 "percent": 17 00:43:59.096 } 00:43:59.096 }, 00:43:59.096 "base_bdevs_list": [ 00:43:59.096 { 00:43:59.096 "name": "spare", 00:43:59.096 "uuid": "135ba70e-52c2-5718-ad83-8f380d7ec37f", 00:43:59.096 "is_configured": true, 00:43:59.096 "data_offset": 2048, 00:43:59.096 "data_size": 63488 00:43:59.096 }, 00:43:59.096 { 00:43:59.096 "name": "BaseBdev2", 00:43:59.096 "uuid": "01bb00c0-0dc0-5041-ad80-2715552e378c", 00:43:59.096 "is_configured": true, 00:43:59.096 "data_offset": 2048, 00:43:59.096 "data_size": 63488 00:43:59.096 }, 00:43:59.096 { 00:43:59.096 "name": "BaseBdev3", 00:43:59.096 "uuid": "a5ed1700-4722-5d53-91fc-27c7d4867917", 00:43:59.096 "is_configured": true, 00:43:59.096 "data_offset": 2048, 00:43:59.096 "data_size": 63488 00:43:59.096 } 00:43:59.096 ] 00:43:59.096 }' 00:43:59.096 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:59.096 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:59.096 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:59.096 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:43:59.096 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:44:00.029 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:44:00.029 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:44:00.029 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:44:00.029 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:44:00.029 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:44:00.029 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:44:00.029 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:00.029 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:00.029 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:00.029 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:00.029 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:00.288 05:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:44:00.288 "name": "raid_bdev1", 00:44:00.288 "uuid": "8e824002-787f-4adb-8f83-694a91ef8297", 00:44:00.288 "strip_size_kb": 64, 00:44:00.288 "state": "online", 00:44:00.288 "raid_level": "raid5f", 00:44:00.288 "superblock": true, 00:44:00.288 "num_base_bdevs": 3, 00:44:00.288 "num_base_bdevs_discovered": 3, 00:44:00.288 "num_base_bdevs_operational": 3, 00:44:00.288 "process": { 00:44:00.288 "type": "rebuild", 00:44:00.288 "target": "spare", 00:44:00.288 "progress": { 00:44:00.288 "blocks": 47104, 00:44:00.288 "percent": 37 00:44:00.288 } 00:44:00.288 }, 00:44:00.288 "base_bdevs_list": [ 00:44:00.288 { 00:44:00.288 "name": "spare", 00:44:00.288 "uuid": "135ba70e-52c2-5718-ad83-8f380d7ec37f", 00:44:00.288 "is_configured": true, 00:44:00.288 "data_offset": 2048, 00:44:00.288 "data_size": 63488 00:44:00.288 }, 00:44:00.288 { 00:44:00.288 "name": "BaseBdev2", 00:44:00.288 "uuid": "01bb00c0-0dc0-5041-ad80-2715552e378c", 00:44:00.288 "is_configured": true, 00:44:00.288 "data_offset": 2048, 00:44:00.288 "data_size": 63488 00:44:00.288 }, 00:44:00.288 { 00:44:00.288 "name": "BaseBdev3", 00:44:00.288 "uuid": "a5ed1700-4722-5d53-91fc-27c7d4867917", 00:44:00.288 
"is_configured": true, 00:44:00.288 "data_offset": 2048, 00:44:00.288 "data_size": 63488 00:44:00.288 } 00:44:00.288 ] 00:44:00.288 }' 00:44:00.288 05:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:44:00.288 05:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:44:00.288 05:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:44:00.288 05:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:44:00.288 05:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:44:01.222 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:44:01.222 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:44:01.222 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:44:01.222 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:44:01.222 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:44:01.222 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:44:01.222 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:01.222 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:01.222 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:01.222 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:01.222 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:01.479 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:44:01.479 "name": "raid_bdev1", 00:44:01.479 "uuid": "8e824002-787f-4adb-8f83-694a91ef8297", 00:44:01.479 "strip_size_kb": 64, 00:44:01.479 "state": "online", 00:44:01.479 "raid_level": "raid5f", 00:44:01.479 "superblock": true, 00:44:01.479 "num_base_bdevs": 3, 00:44:01.479 "num_base_bdevs_discovered": 3, 00:44:01.479 "num_base_bdevs_operational": 3, 00:44:01.479 "process": { 00:44:01.479 "type": "rebuild", 00:44:01.480 "target": "spare", 00:44:01.480 "progress": { 00:44:01.480 "blocks": 69632, 00:44:01.480 "percent": 54 00:44:01.480 } 00:44:01.480 }, 00:44:01.480 "base_bdevs_list": [ 00:44:01.480 { 00:44:01.480 "name": "spare", 00:44:01.480 "uuid": "135ba70e-52c2-5718-ad83-8f380d7ec37f", 00:44:01.480 "is_configured": true, 00:44:01.480 "data_offset": 2048, 00:44:01.480 "data_size": 63488 00:44:01.480 }, 00:44:01.480 { 00:44:01.480 "name": "BaseBdev2", 00:44:01.480 "uuid": "01bb00c0-0dc0-5041-ad80-2715552e378c", 00:44:01.480 "is_configured": true, 00:44:01.480 "data_offset": 2048, 00:44:01.480 "data_size": 63488 00:44:01.480 }, 00:44:01.480 { 00:44:01.480 "name": "BaseBdev3", 00:44:01.480 "uuid": "a5ed1700-4722-5d53-91fc-27c7d4867917", 00:44:01.480 "is_configured": true, 00:44:01.480 "data_offset": 2048, 00:44:01.480 "data_size": 63488 00:44:01.480 } 00:44:01.480 ] 00:44:01.480 }' 00:44:01.480 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:44:01.480 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:44:01.480 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:44:01.480 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:44:01.480 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:44:02.416 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:44:02.416 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:44:02.416 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:44:02.416 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:44:02.416 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:44:02.416 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:44:02.416 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:02.416 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:02.416 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:02.416 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:02.416 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:02.416 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:44:02.416 "name": "raid_bdev1", 00:44:02.416 "uuid": "8e824002-787f-4adb-8f83-694a91ef8297", 00:44:02.416 "strip_size_kb": 64, 00:44:02.416 "state": "online", 00:44:02.416 "raid_level": "raid5f", 00:44:02.416 "superblock": true, 00:44:02.416 "num_base_bdevs": 3, 00:44:02.416 "num_base_bdevs_discovered": 3, 00:44:02.416 "num_base_bdevs_operational": 3, 00:44:02.416 "process": { 00:44:02.416 "type": "rebuild", 00:44:02.416 "target": "spare", 00:44:02.416 "progress": { 00:44:02.416 "blocks": 94208, 00:44:02.416 "percent": 74 00:44:02.416 } 00:44:02.416 }, 00:44:02.416 "base_bdevs_list": [ 00:44:02.416 { 00:44:02.416 "name": "spare", 00:44:02.416 "uuid": "135ba70e-52c2-5718-ad83-8f380d7ec37f", 00:44:02.416 "is_configured": true, 
00:44:02.416 "data_offset": 2048, 00:44:02.416 "data_size": 63488 00:44:02.416 }, 00:44:02.416 { 00:44:02.416 "name": "BaseBdev2", 00:44:02.416 "uuid": "01bb00c0-0dc0-5041-ad80-2715552e378c", 00:44:02.416 "is_configured": true, 00:44:02.416 "data_offset": 2048, 00:44:02.416 "data_size": 63488 00:44:02.416 }, 00:44:02.416 { 00:44:02.416 "name": "BaseBdev3", 00:44:02.416 "uuid": "a5ed1700-4722-5d53-91fc-27c7d4867917", 00:44:02.416 "is_configured": true, 00:44:02.416 "data_offset": 2048, 00:44:02.416 "data_size": 63488 00:44:02.416 } 00:44:02.416 ] 00:44:02.416 }' 00:44:02.416 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:44:02.674 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:44:02.674 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:44:02.674 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:44:02.674 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:44:03.608 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:44:03.608 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:44:03.608 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:44:03.608 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:44:03.608 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:44:03.608 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:44:03.608 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:03.608 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:03.608 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.609 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:03.609 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:03.609 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:44:03.609 "name": "raid_bdev1", 00:44:03.609 "uuid": "8e824002-787f-4adb-8f83-694a91ef8297", 00:44:03.609 "strip_size_kb": 64, 00:44:03.609 "state": "online", 00:44:03.609 "raid_level": "raid5f", 00:44:03.609 "superblock": true, 00:44:03.609 "num_base_bdevs": 3, 00:44:03.609 "num_base_bdevs_discovered": 3, 00:44:03.609 "num_base_bdevs_operational": 3, 00:44:03.609 "process": { 00:44:03.609 "type": "rebuild", 00:44:03.609 "target": "spare", 00:44:03.609 "progress": { 00:44:03.609 "blocks": 116736, 00:44:03.609 "percent": 91 00:44:03.609 } 00:44:03.609 }, 00:44:03.609 "base_bdevs_list": [ 00:44:03.609 { 00:44:03.609 "name": "spare", 00:44:03.609 "uuid": "135ba70e-52c2-5718-ad83-8f380d7ec37f", 00:44:03.609 "is_configured": true, 00:44:03.609 "data_offset": 2048, 00:44:03.609 "data_size": 63488 00:44:03.609 }, 00:44:03.609 { 00:44:03.609 "name": "BaseBdev2", 00:44:03.609 "uuid": "01bb00c0-0dc0-5041-ad80-2715552e378c", 00:44:03.609 "is_configured": true, 00:44:03.609 "data_offset": 2048, 00:44:03.609 "data_size": 63488 00:44:03.609 }, 00:44:03.609 { 00:44:03.609 "name": "BaseBdev3", 00:44:03.609 "uuid": "a5ed1700-4722-5d53-91fc-27c7d4867917", 00:44:03.609 "is_configured": true, 00:44:03.609 "data_offset": 2048, 00:44:03.609 "data_size": 63488 00:44:03.609 } 00:44:03.609 ] 00:44:03.609 }' 00:44:03.609 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:44:03.609 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:44:03.609 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:44:03.867 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:44:03.867 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:44:04.126 [2024-12-09 05:34:50.922115] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:44:04.126 [2024-12-09 05:34:50.922263] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:44:04.126 [2024-12-09 05:34:50.922486] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:44:04.696 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:44:04.696 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:44:04.696 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:44:04.697 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:44:04.697 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:44:04.697 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:44:04.697 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:04.697 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:04.697 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:04.697 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:04.697 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:04.987 05:34:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:44:04.987 "name": "raid_bdev1", 00:44:04.987 "uuid": "8e824002-787f-4adb-8f83-694a91ef8297", 00:44:04.987 "strip_size_kb": 64, 00:44:04.987 "state": "online", 00:44:04.987 "raid_level": "raid5f", 00:44:04.987 "superblock": true, 00:44:04.987 "num_base_bdevs": 3, 00:44:04.987 "num_base_bdevs_discovered": 3, 00:44:04.987 "num_base_bdevs_operational": 3, 00:44:04.987 "base_bdevs_list": [ 00:44:04.987 { 00:44:04.987 "name": "spare", 00:44:04.987 "uuid": "135ba70e-52c2-5718-ad83-8f380d7ec37f", 00:44:04.987 "is_configured": true, 00:44:04.987 "data_offset": 2048, 00:44:04.987 "data_size": 63488 00:44:04.987 }, 00:44:04.987 { 00:44:04.987 "name": "BaseBdev2", 00:44:04.987 "uuid": "01bb00c0-0dc0-5041-ad80-2715552e378c", 00:44:04.987 "is_configured": true, 00:44:04.987 "data_offset": 2048, 00:44:04.987 "data_size": 63488 00:44:04.987 }, 00:44:04.987 { 00:44:04.987 "name": "BaseBdev3", 00:44:04.987 "uuid": "a5ed1700-4722-5d53-91fc-27c7d4867917", 00:44:04.987 "is_configured": true, 00:44:04.987 "data_offset": 2048, 00:44:04.987 "data_size": 63488 00:44:04.987 } 00:44:04.987 ] 00:44:04.987 }' 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:44:04.987 
05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:44:04.987 "name": "raid_bdev1", 00:44:04.987 "uuid": "8e824002-787f-4adb-8f83-694a91ef8297", 00:44:04.987 "strip_size_kb": 64, 00:44:04.987 "state": "online", 00:44:04.987 "raid_level": "raid5f", 00:44:04.987 "superblock": true, 00:44:04.987 "num_base_bdevs": 3, 00:44:04.987 "num_base_bdevs_discovered": 3, 00:44:04.987 "num_base_bdevs_operational": 3, 00:44:04.987 "base_bdevs_list": [ 00:44:04.987 { 00:44:04.987 "name": "spare", 00:44:04.987 "uuid": "135ba70e-52c2-5718-ad83-8f380d7ec37f", 00:44:04.987 "is_configured": true, 00:44:04.987 "data_offset": 2048, 00:44:04.987 "data_size": 63488 00:44:04.987 }, 00:44:04.987 { 00:44:04.987 "name": "BaseBdev2", 00:44:04.987 "uuid": "01bb00c0-0dc0-5041-ad80-2715552e378c", 00:44:04.987 "is_configured": true, 00:44:04.987 "data_offset": 2048, 00:44:04.987 "data_size": 63488 00:44:04.987 }, 00:44:04.987 { 00:44:04.987 "name": "BaseBdev3", 00:44:04.987 "uuid": "a5ed1700-4722-5d53-91fc-27c7d4867917", 00:44:04.987 "is_configured": true, 00:44:04.987 "data_offset": 2048, 
00:44:04.987 "data_size": 63488 00:44:04.987 } 00:44:04.987 ] 00:44:04.987 }' 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:44:04.987 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:05.246 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:05.246 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:05.246 "name": "raid_bdev1", 00:44:05.246 "uuid": "8e824002-787f-4adb-8f83-694a91ef8297", 00:44:05.246 "strip_size_kb": 64, 00:44:05.246 "state": "online", 00:44:05.246 "raid_level": "raid5f", 00:44:05.246 "superblock": true, 00:44:05.246 "num_base_bdevs": 3, 00:44:05.246 "num_base_bdevs_discovered": 3, 00:44:05.246 "num_base_bdevs_operational": 3, 00:44:05.246 "base_bdevs_list": [ 00:44:05.246 { 00:44:05.246 "name": "spare", 00:44:05.246 "uuid": "135ba70e-52c2-5718-ad83-8f380d7ec37f", 00:44:05.246 "is_configured": true, 00:44:05.246 "data_offset": 2048, 00:44:05.246 "data_size": 63488 00:44:05.246 }, 00:44:05.246 { 00:44:05.246 "name": "BaseBdev2", 00:44:05.246 "uuid": "01bb00c0-0dc0-5041-ad80-2715552e378c", 00:44:05.246 "is_configured": true, 00:44:05.246 "data_offset": 2048, 00:44:05.246 "data_size": 63488 00:44:05.246 }, 00:44:05.246 { 00:44:05.246 "name": "BaseBdev3", 00:44:05.246 "uuid": "a5ed1700-4722-5d53-91fc-27c7d4867917", 00:44:05.246 "is_configured": true, 00:44:05.246 "data_offset": 2048, 00:44:05.246 "data_size": 63488 00:44:05.246 } 00:44:05.246 ] 00:44:05.246 }' 00:44:05.246 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:05.246 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:05.505 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:44:05.505 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:05.505 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:05.505 [2024-12-09 05:34:52.473461] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:44:05.505 [2024-12-09 05:34:52.473514] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:44:05.505 [2024-12-09 05:34:52.473639] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:44:05.505 [2024-12-09 05:34:52.473765] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:44:05.505 [2024-12-09 05:34:52.473809] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:44:05.764 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:05.764 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:05.764 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:05.764 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:05.764 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:44:05.764 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:05.764 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:44:05.764 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:44:05.764 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:44:05.764 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:44:05.764 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:44:05.764 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:44:05.764 05:34:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:44:05.764 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:44:05.764 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:44:05.764 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:44:05.764 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:44:05.764 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:44:05.764 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:44:06.024 /dev/nbd0 00:44:06.024 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:44:06.024 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:44:06.024 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:44:06.024 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:44:06.024 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:44:06.024 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:44:06.024 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:44:06.024 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:44:06.024 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:44:06.024 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:44:06.024 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:44:06.024 1+0 records in 00:44:06.024 1+0 records out 00:44:06.024 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473932 s, 8.6 MB/s 00:44:06.024 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:06.024 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:44:06.024 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:06.024 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:44:06.024 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:44:06.024 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:44:06.024 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:44:06.024 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:44:06.283 /dev/nbd1 00:44:06.283 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:44:06.283 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:44:06.283 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:44:06.283 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:44:06.283 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:44:06.283 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:44:06.283 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:44:06.283 
05:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:44:06.283 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:44:06.283 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:44:06.283 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:44:06.283 1+0 records in 00:44:06.283 1+0 records out 00:44:06.283 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394963 s, 10.4 MB/s 00:44:06.283 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:06.283 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:44:06.284 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:06.284 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:44:06.284 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:44:06.284 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:44:06.284 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:44:06.284 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:44:06.542 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:44:06.542 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:44:06.542 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:44:06.542 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:44:06.542 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:44:06.542 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:44:06.542 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:44:06.801 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:44:06.801 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:44:06.801 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:44:06.801 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:44:06.801 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:44:06.801 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:44:06.802 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:44:06.802 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:44:06.802 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:44:06.802 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:44:07.060 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:44:07.060 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:44:07.060 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:44:07.060 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:44:07.060 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:44:07.060 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:44:07.060 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:44:07.060 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:44:07.060 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:44:07.060 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:44:07.060 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:07.060 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:07.060 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:07.060 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:44:07.060 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:07.060 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:07.060 [2024-12-09 05:34:53.970377] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:44:07.060 [2024-12-09 05:34:53.970463] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:07.060 [2024-12-09 05:34:53.970494] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:44:07.060 [2024-12-09 05:34:53.970537] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:07.060 [2024-12-09 05:34:53.973778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:07.060 [2024-12-09 05:34:53.973845] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:44:07.060 [2024-12-09 05:34:53.973979] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:44:07.060 [2024-12-09 05:34:53.974051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:44:07.060 [2024-12-09 05:34:53.974238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:44:07.061 [2024-12-09 05:34:53.974403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:44:07.061 spare 00:44:07.061 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:07.061 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:44:07.061 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:07.061 05:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:07.320 [2024-12-09 05:34:54.074603] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:44:07.320 [2024-12-09 05:34:54.074672] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:44:07.320 [2024-12-09 05:34:54.075173] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:44:07.320 [2024-12-09 05:34:54.080328] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:44:07.320 [2024-12-09 05:34:54.080354] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:44:07.320 [2024-12-09 05:34:54.080654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:44:07.320 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:07.320 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:44:07.320 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:44:07.320 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:44:07.320 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:07.320 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:07.320 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:44:07.320 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:07.320 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:07.320 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:07.320 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:07.320 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:07.320 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:07.320 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:07.320 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:07.320 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:07.320 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:07.320 "name": "raid_bdev1", 00:44:07.320 "uuid": "8e824002-787f-4adb-8f83-694a91ef8297", 00:44:07.320 "strip_size_kb": 64, 00:44:07.320 "state": "online", 00:44:07.320 "raid_level": "raid5f", 00:44:07.320 "superblock": true, 00:44:07.320 "num_base_bdevs": 3, 00:44:07.320 "num_base_bdevs_discovered": 3, 00:44:07.320 "num_base_bdevs_operational": 3, 00:44:07.320 "base_bdevs_list": [ 00:44:07.320 { 
00:44:07.320 "name": "spare", 00:44:07.320 "uuid": "135ba70e-52c2-5718-ad83-8f380d7ec37f", 00:44:07.320 "is_configured": true, 00:44:07.320 "data_offset": 2048, 00:44:07.320 "data_size": 63488 00:44:07.320 }, 00:44:07.320 { 00:44:07.320 "name": "BaseBdev2", 00:44:07.320 "uuid": "01bb00c0-0dc0-5041-ad80-2715552e378c", 00:44:07.320 "is_configured": true, 00:44:07.320 "data_offset": 2048, 00:44:07.320 "data_size": 63488 00:44:07.320 }, 00:44:07.320 { 00:44:07.320 "name": "BaseBdev3", 00:44:07.320 "uuid": "a5ed1700-4722-5d53-91fc-27c7d4867917", 00:44:07.320 "is_configured": true, 00:44:07.320 "data_offset": 2048, 00:44:07.320 "data_size": 63488 00:44:07.320 } 00:44:07.320 ] 00:44:07.320 }' 00:44:07.320 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:07.320 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:07.888 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:44:07.888 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:44:07.888 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:44:07.888 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:44:07.888 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:44:07.888 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:07.888 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:07.888 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:07.888 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:07.888 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- 
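The `verify_raid_bdev_state` helper above captures the `bdev_raid_get_bdevs` output and checks it with `jq`. As a standalone sketch of those checks (the JSON literal is trimmed from the record shown in the trace; only the fields the helper inspects are kept):

```shell
set -e
# Record reported by `bdev_raid_get_bdevs` for raid_bdev1, trimmed to the
# fields that verify_raid_bdev_state compares against its arguments.
info='{
  "name": "raid_bdev1",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "raid5f",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}'
state=$(echo "$info" | jq -r '.state')
level=$(echo "$info" | jq -r '.raid_level')
strip=$(echo "$info" | jq -r '.strip_size_kb')
discovered=$(echo "$info" | jq -r '.num_base_bdevs_discovered')
# Expected values match the call `verify_raid_bdev_state raid_bdev1 online raid5f 64 3`.
[ "$state" = online ] && [ "$level" = raid5f ]
[ "$strip" -eq 64 ] && [ "$discovered" -eq 3 ]
echo "verified: $state/$level strip=${strip}K discovered=$discovered"
```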
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:07.888 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:44:07.888 "name": "raid_bdev1", 00:44:07.888 "uuid": "8e824002-787f-4adb-8f83-694a91ef8297", 00:44:07.888 "strip_size_kb": 64, 00:44:07.888 "state": "online", 00:44:07.888 "raid_level": "raid5f", 00:44:07.888 "superblock": true, 00:44:07.888 "num_base_bdevs": 3, 00:44:07.888 "num_base_bdevs_discovered": 3, 00:44:07.888 "num_base_bdevs_operational": 3, 00:44:07.888 "base_bdevs_list": [ 00:44:07.888 { 00:44:07.888 "name": "spare", 00:44:07.888 "uuid": "135ba70e-52c2-5718-ad83-8f380d7ec37f", 00:44:07.888 "is_configured": true, 00:44:07.888 "data_offset": 2048, 00:44:07.888 "data_size": 63488 00:44:07.888 }, 00:44:07.888 { 00:44:07.888 "name": "BaseBdev2", 00:44:07.888 "uuid": "01bb00c0-0dc0-5041-ad80-2715552e378c", 00:44:07.888 "is_configured": true, 00:44:07.888 "data_offset": 2048, 00:44:07.888 "data_size": 63488 00:44:07.888 }, 00:44:07.888 { 00:44:07.888 "name": "BaseBdev3", 00:44:07.888 "uuid": "a5ed1700-4722-5d53-91fc-27c7d4867917", 00:44:07.888 "is_configured": true, 00:44:07.888 "data_offset": 2048, 00:44:07.888 "data_size": 63488 00:44:07.888 } 00:44:07.888 ] 00:44:07.888 }' 00:44:07.888 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:44:07.888 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:44:07.888 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:44:07.888 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:44:07.888 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:07.888 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:07.888 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:44:07.888 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:07.888 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:07.888 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:44:07.888 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:44:07.889 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:07.889 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:07.889 [2024-12-09 05:34:54.838633] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:44:07.889 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:07.889 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:44:07.889 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:44:07.889 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:44:07.889 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:07.889 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:07.889 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:44:07.889 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:07.889 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:07.889 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:07.889 05:34:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:07.889 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:07.889 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:07.889 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:07.889 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:08.147 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:08.147 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:08.147 "name": "raid_bdev1", 00:44:08.147 "uuid": "8e824002-787f-4adb-8f83-694a91ef8297", 00:44:08.147 "strip_size_kb": 64, 00:44:08.147 "state": "online", 00:44:08.147 "raid_level": "raid5f", 00:44:08.147 "superblock": true, 00:44:08.147 "num_base_bdevs": 3, 00:44:08.147 "num_base_bdevs_discovered": 2, 00:44:08.147 "num_base_bdevs_operational": 2, 00:44:08.147 "base_bdevs_list": [ 00:44:08.147 { 00:44:08.147 "name": null, 00:44:08.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:08.148 "is_configured": false, 00:44:08.148 "data_offset": 0, 00:44:08.148 "data_size": 63488 00:44:08.148 }, 00:44:08.148 { 00:44:08.148 "name": "BaseBdev2", 00:44:08.148 "uuid": "01bb00c0-0dc0-5041-ad80-2715552e378c", 00:44:08.148 "is_configured": true, 00:44:08.148 "data_offset": 2048, 00:44:08.148 "data_size": 63488 00:44:08.148 }, 00:44:08.148 { 00:44:08.148 "name": "BaseBdev3", 00:44:08.148 "uuid": "a5ed1700-4722-5d53-91fc-27c7d4867917", 00:44:08.148 "is_configured": true, 00:44:08.148 "data_offset": 2048, 00:44:08.148 "data_size": 63488 00:44:08.148 } 00:44:08.148 ] 00:44:08.148 }' 00:44:08.148 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:08.148 05:34:54 bdev_raid.raid5f_rebuild_test_sb -- 
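After `bdev_raid_remove_base_bdev spare`, the record above shows the removed slot with a `null` name and an all-zero uuid while the array stays online but degraded (2 of 3 base bdevs discovered). A small sketch of spotting that degraded slot with the same `jq` tooling the test uses (JSON trimmed from the record in the trace):

```shell
set -e
# Degraded raid_bdev1 record after the spare was removed: the vacated slot
# keeps its position but reports name=null and is_configured=false.
info='{
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "base_bdevs_list": [
    {"name": null, "uuid": "00000000-0000-0000-0000-000000000000", "is_configured": false},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true}
  ]
}'
# Count slots that are not configured, and read the first slot's name,
# using jq's `//` operator (as the test scripts do) to default null to "none".
missing=$(echo "$info" | jq '[.base_bdevs_list[] | select(.is_configured | not)] | length')
first=$(echo "$info" | jq -r '.base_bdevs_list[0].name // "none"')
[ "$missing" -eq 1 ]
[ "$first" = none ]
echo "degraded: $missing slot(s) missing, first slot name=$first"
```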
common/autotest_common.sh@10 -- # set +x 00:44:08.712 05:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:44:08.712 05:34:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:08.712 05:34:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:08.712 [2024-12-09 05:34:55.382903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:44:08.712 [2024-12-09 05:34:55.383366] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:44:08.712 [2024-12-09 05:34:55.383402] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:44:08.712 [2024-12-09 05:34:55.383488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:44:08.712 [2024-12-09 05:34:55.398493] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:44:08.712 05:34:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:08.712 05:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:44:08.712 [2024-12-09 05:34:55.406214] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:44:09.647 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:44:09.647 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:44:09.647 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:44:09.647 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:44:09.647 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:44:09.647 
05:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:09.647 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:09.647 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:09.647 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:09.647 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:09.647 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:44:09.647 "name": "raid_bdev1", 00:44:09.647 "uuid": "8e824002-787f-4adb-8f83-694a91ef8297", 00:44:09.647 "strip_size_kb": 64, 00:44:09.647 "state": "online", 00:44:09.647 "raid_level": "raid5f", 00:44:09.647 "superblock": true, 00:44:09.647 "num_base_bdevs": 3, 00:44:09.647 "num_base_bdevs_discovered": 3, 00:44:09.647 "num_base_bdevs_operational": 3, 00:44:09.647 "process": { 00:44:09.647 "type": "rebuild", 00:44:09.647 "target": "spare", 00:44:09.647 "progress": { 00:44:09.647 "blocks": 18432, 00:44:09.647 "percent": 14 00:44:09.647 } 00:44:09.647 }, 00:44:09.647 "base_bdevs_list": [ 00:44:09.647 { 00:44:09.647 "name": "spare", 00:44:09.647 "uuid": "135ba70e-52c2-5718-ad83-8f380d7ec37f", 00:44:09.647 "is_configured": true, 00:44:09.647 "data_offset": 2048, 00:44:09.647 "data_size": 63488 00:44:09.647 }, 00:44:09.647 { 00:44:09.647 "name": "BaseBdev2", 00:44:09.647 "uuid": "01bb00c0-0dc0-5041-ad80-2715552e378c", 00:44:09.647 "is_configured": true, 00:44:09.647 "data_offset": 2048, 00:44:09.647 "data_size": 63488 00:44:09.647 }, 00:44:09.647 { 00:44:09.647 "name": "BaseBdev3", 00:44:09.647 "uuid": "a5ed1700-4722-5d53-91fc-27c7d4867917", 00:44:09.647 "is_configured": true, 00:44:09.647 "data_offset": 2048, 00:44:09.647 "data_size": 63488 00:44:09.647 } 00:44:09.647 ] 00:44:09.647 }' 00:44:09.647 05:34:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:44:09.647 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:44:09.647 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:44:09.647 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:44:09.647 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:44:09.647 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:09.647 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:09.647 [2024-12-09 05:34:56.568840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:44:09.907 [2024-12-09 05:34:56.620510] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:44:09.907 [2024-12-09 05:34:56.620648] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:44:09.907 [2024-12-09 05:34:56.620674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:44:09.907 [2024-12-09 05:34:56.620696] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:44:09.907 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:09.907 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:44:09.907 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:44:09.907 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:44:09.907 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:09.907 
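While a rebuild is running the raid bdev record gains a `process` object, as in the trace above (`type: rebuild`, `target: spare`, block-level progress). The `verify_raid_bdev_process` checks use `jq` filters with a `// "none"` fallback so an idle array reads as `none`; a standalone sketch with the JSON trimmed from the record shown:

```shell
set -e
# raid_bdev1 record during the rebuild started after re-adding the spare.
info='{
  "name": "raid_bdev1",
  "process": {
    "type": "rebuild",
    "target": "spare",
    "progress": {"blocks": 18432, "percent": 14}
  }
}'
# Same filters as verify_raid_bdev_process: missing process fields
# fall back to "none" via jq's // operator.
ptype=$(echo "$info" | jq -r '.process.type // "none"')
target=$(echo "$info" | jq -r '.process.target // "none"')
percent=$(echo "$info" | jq -r '.process.progress.percent // 0')
[ "$ptype" = rebuild ] && [ "$target" = spare ]
echo "process=$ptype target=$target ${percent}% done"
```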
05:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:09.907 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:44:09.907 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:09.907 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:09.907 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:09.907 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:09.907 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:09.907 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:09.907 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:09.907 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:09.907 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:09.907 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:09.907 "name": "raid_bdev1", 00:44:09.907 "uuid": "8e824002-787f-4adb-8f83-694a91ef8297", 00:44:09.907 "strip_size_kb": 64, 00:44:09.907 "state": "online", 00:44:09.907 "raid_level": "raid5f", 00:44:09.907 "superblock": true, 00:44:09.907 "num_base_bdevs": 3, 00:44:09.907 "num_base_bdevs_discovered": 2, 00:44:09.907 "num_base_bdevs_operational": 2, 00:44:09.907 "base_bdevs_list": [ 00:44:09.907 { 00:44:09.907 "name": null, 00:44:09.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:09.907 "is_configured": false, 00:44:09.907 "data_offset": 0, 00:44:09.907 "data_size": 63488 00:44:09.907 }, 00:44:09.907 { 00:44:09.907 "name": "BaseBdev2", 00:44:09.907 "uuid": 
"01bb00c0-0dc0-5041-ad80-2715552e378c", 00:44:09.907 "is_configured": true, 00:44:09.907 "data_offset": 2048, 00:44:09.907 "data_size": 63488 00:44:09.907 }, 00:44:09.907 { 00:44:09.907 "name": "BaseBdev3", 00:44:09.907 "uuid": "a5ed1700-4722-5d53-91fc-27c7d4867917", 00:44:09.907 "is_configured": true, 00:44:09.907 "data_offset": 2048, 00:44:09.907 "data_size": 63488 00:44:09.907 } 00:44:09.907 ] 00:44:09.907 }' 00:44:09.907 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:09.907 05:34:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:10.473 05:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:44:10.473 05:34:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:10.473 05:34:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:10.473 [2024-12-09 05:34:57.198253] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:44:10.473 [2024-12-09 05:34:57.198356] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:10.473 [2024-12-09 05:34:57.198390] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:44:10.473 [2024-12-09 05:34:57.198411] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:10.473 [2024-12-09 05:34:57.199225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:10.473 [2024-12-09 05:34:57.199364] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:44:10.473 [2024-12-09 05:34:57.199508] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:44:10.473 [2024-12-09 05:34:57.199550] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:44:10.473 [2024-12-09 05:34:57.199580] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:44:10.473 [2024-12-09 05:34:57.199630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:44:10.473 [2024-12-09 05:34:57.215063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:44:10.473 spare 00:44:10.473 05:34:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:10.473 05:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:44:10.473 [2024-12-09 05:34:57.222830] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:44:11.408 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:44:11.408 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:44:11.408 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:44:11.408 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:44:11.408 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:44:11.408 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:11.408 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:11.408 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:11.408 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:11.408 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:11.408 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:44:11.408 "name": 
"raid_bdev1", 00:44:11.408 "uuid": "8e824002-787f-4adb-8f83-694a91ef8297", 00:44:11.408 "strip_size_kb": 64, 00:44:11.408 "state": "online", 00:44:11.408 "raid_level": "raid5f", 00:44:11.408 "superblock": true, 00:44:11.408 "num_base_bdevs": 3, 00:44:11.408 "num_base_bdevs_discovered": 3, 00:44:11.408 "num_base_bdevs_operational": 3, 00:44:11.408 "process": { 00:44:11.408 "type": "rebuild", 00:44:11.408 "target": "spare", 00:44:11.408 "progress": { 00:44:11.408 "blocks": 18432, 00:44:11.408 "percent": 14 00:44:11.408 } 00:44:11.408 }, 00:44:11.408 "base_bdevs_list": [ 00:44:11.408 { 00:44:11.408 "name": "spare", 00:44:11.408 "uuid": "135ba70e-52c2-5718-ad83-8f380d7ec37f", 00:44:11.408 "is_configured": true, 00:44:11.408 "data_offset": 2048, 00:44:11.408 "data_size": 63488 00:44:11.408 }, 00:44:11.408 { 00:44:11.408 "name": "BaseBdev2", 00:44:11.408 "uuid": "01bb00c0-0dc0-5041-ad80-2715552e378c", 00:44:11.408 "is_configured": true, 00:44:11.408 "data_offset": 2048, 00:44:11.408 "data_size": 63488 00:44:11.408 }, 00:44:11.408 { 00:44:11.408 "name": "BaseBdev3", 00:44:11.408 "uuid": "a5ed1700-4722-5d53-91fc-27c7d4867917", 00:44:11.408 "is_configured": true, 00:44:11.408 "data_offset": 2048, 00:44:11.408 "data_size": 63488 00:44:11.408 } 00:44:11.408 ] 00:44:11.408 }' 00:44:11.408 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:44:11.408 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:44:11.408 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:44:11.666 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:44:11.666 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:44:11.666 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:11.666 05:34:58 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:11.666 [2024-12-09 05:34:58.392584] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:44:11.666 [2024-12-09 05:34:58.437922] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:44:11.666 [2024-12-09 05:34:58.438015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:44:11.666 [2024-12-09 05:34:58.438044] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:44:11.666 [2024-12-09 05:34:58.438056] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:44:11.666 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:11.666 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:44:11.666 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:44:11.666 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:44:11.666 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:11.666 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:11.666 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:44:11.666 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:11.667 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:11.667 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:11.667 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:11.667 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:11.667 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:11.667 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:11.667 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:11.667 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:11.667 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:11.667 "name": "raid_bdev1", 00:44:11.667 "uuid": "8e824002-787f-4adb-8f83-694a91ef8297", 00:44:11.667 "strip_size_kb": 64, 00:44:11.667 "state": "online", 00:44:11.667 "raid_level": "raid5f", 00:44:11.667 "superblock": true, 00:44:11.667 "num_base_bdevs": 3, 00:44:11.667 "num_base_bdevs_discovered": 2, 00:44:11.667 "num_base_bdevs_operational": 2, 00:44:11.667 "base_bdevs_list": [ 00:44:11.667 { 00:44:11.667 "name": null, 00:44:11.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:11.667 "is_configured": false, 00:44:11.667 "data_offset": 0, 00:44:11.667 "data_size": 63488 00:44:11.667 }, 00:44:11.667 { 00:44:11.667 "name": "BaseBdev2", 00:44:11.667 "uuid": "01bb00c0-0dc0-5041-ad80-2715552e378c", 00:44:11.667 "is_configured": true, 00:44:11.667 "data_offset": 2048, 00:44:11.667 "data_size": 63488 00:44:11.667 }, 00:44:11.667 { 00:44:11.667 "name": "BaseBdev3", 00:44:11.667 "uuid": "a5ed1700-4722-5d53-91fc-27c7d4867917", 00:44:11.667 "is_configured": true, 00:44:11.667 "data_offset": 2048, 00:44:11.667 "data_size": 63488 00:44:11.667 } 00:44:11.667 ] 00:44:11.667 }' 00:44:11.667 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:11.667 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:12.233 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:44:12.233 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:44:12.233 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:44:12.234 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:44:12.234 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:44:12.234 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:12.234 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:12.234 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:12.234 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:12.234 05:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:12.234 05:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:44:12.234 "name": "raid_bdev1", 00:44:12.234 "uuid": "8e824002-787f-4adb-8f83-694a91ef8297", 00:44:12.234 "strip_size_kb": 64, 00:44:12.234 "state": "online", 00:44:12.234 "raid_level": "raid5f", 00:44:12.234 "superblock": true, 00:44:12.234 "num_base_bdevs": 3, 00:44:12.234 "num_base_bdevs_discovered": 2, 00:44:12.234 "num_base_bdevs_operational": 2, 00:44:12.234 "base_bdevs_list": [ 00:44:12.234 { 00:44:12.234 "name": null, 00:44:12.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:12.234 "is_configured": false, 00:44:12.234 "data_offset": 0, 00:44:12.234 "data_size": 63488 00:44:12.234 }, 00:44:12.234 { 00:44:12.234 "name": "BaseBdev2", 00:44:12.234 "uuid": "01bb00c0-0dc0-5041-ad80-2715552e378c", 00:44:12.234 "is_configured": true, 00:44:12.234 "data_offset": 2048, 00:44:12.234 "data_size": 63488 00:44:12.234 }, 00:44:12.234 { 
00:44:12.234 "name": "BaseBdev3", 00:44:12.234 "uuid": "a5ed1700-4722-5d53-91fc-27c7d4867917", 00:44:12.234 "is_configured": true, 00:44:12.234 "data_offset": 2048, 00:44:12.234 "data_size": 63488 00:44:12.234 } 00:44:12.234 ] 00:44:12.234 }' 00:44:12.234 05:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:44:12.234 05:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:44:12.234 05:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:44:12.234 05:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:44:12.234 05:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:44:12.234 05:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:12.234 05:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:12.234 05:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:12.234 05:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:44:12.234 05:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:12.234 05:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:12.234 [2024-12-09 05:34:59.176809] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:44:12.234 [2024-12-09 05:34:59.176894] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:12.234 [2024-12-09 05:34:59.176934] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:44:12.234 [2024-12-09 05:34:59.176950] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:12.234 
[2024-12-09 05:34:59.177572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:12.234 [2024-12-09 05:34:59.177615] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:44:12.234 [2024-12-09 05:34:59.177744] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:44:12.234 [2024-12-09 05:34:59.177766] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:44:12.234 [2024-12-09 05:34:59.177826] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:44:12.234 [2024-12-09 05:34:59.177841] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:44:12.234 BaseBdev1 00:44:12.234 05:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:12.234 05:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:44:13.611 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:44:13.611 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:44:13.611 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:44:13.611 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:13.611 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:13.611 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:44:13.611 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:13.611 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:13.611 05:35:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:13.611 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:13.611 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:13.611 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:13.611 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:13.611 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:13.611 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:13.611 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:13.611 "name": "raid_bdev1", 00:44:13.611 "uuid": "8e824002-787f-4adb-8f83-694a91ef8297", 00:44:13.611 "strip_size_kb": 64, 00:44:13.611 "state": "online", 00:44:13.611 "raid_level": "raid5f", 00:44:13.611 "superblock": true, 00:44:13.611 "num_base_bdevs": 3, 00:44:13.611 "num_base_bdevs_discovered": 2, 00:44:13.611 "num_base_bdevs_operational": 2, 00:44:13.611 "base_bdevs_list": [ 00:44:13.611 { 00:44:13.611 "name": null, 00:44:13.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:13.611 "is_configured": false, 00:44:13.611 "data_offset": 0, 00:44:13.611 "data_size": 63488 00:44:13.611 }, 00:44:13.611 { 00:44:13.611 "name": "BaseBdev2", 00:44:13.611 "uuid": "01bb00c0-0dc0-5041-ad80-2715552e378c", 00:44:13.611 "is_configured": true, 00:44:13.611 "data_offset": 2048, 00:44:13.611 "data_size": 63488 00:44:13.611 }, 00:44:13.611 { 00:44:13.611 "name": "BaseBdev3", 00:44:13.611 "uuid": "a5ed1700-4722-5d53-91fc-27c7d4867917", 00:44:13.611 "is_configured": true, 00:44:13.611 "data_offset": 2048, 00:44:13.611 "data_size": 63488 00:44:13.611 } 00:44:13.611 ] 00:44:13.611 }' 00:44:13.611 05:35:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:13.611 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:13.871 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:44:13.871 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:44:13.871 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:44:13.871 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:44:13.871 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:44:13.871 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:13.871 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:13.871 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:13.871 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:13.871 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:13.871 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:44:13.871 "name": "raid_bdev1", 00:44:13.871 "uuid": "8e824002-787f-4adb-8f83-694a91ef8297", 00:44:13.871 "strip_size_kb": 64, 00:44:13.871 "state": "online", 00:44:13.871 "raid_level": "raid5f", 00:44:13.871 "superblock": true, 00:44:13.871 "num_base_bdevs": 3, 00:44:13.871 "num_base_bdevs_discovered": 2, 00:44:13.871 "num_base_bdevs_operational": 2, 00:44:13.871 "base_bdevs_list": [ 00:44:13.871 { 00:44:13.871 "name": null, 00:44:13.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:13.871 "is_configured": false, 00:44:13.871 "data_offset": 0, 00:44:13.871 "data_size": 63488 
00:44:13.871 }, 00:44:13.871 { 00:44:13.871 "name": "BaseBdev2", 00:44:13.871 "uuid": "01bb00c0-0dc0-5041-ad80-2715552e378c", 00:44:13.871 "is_configured": true, 00:44:13.872 "data_offset": 2048, 00:44:13.872 "data_size": 63488 00:44:13.872 }, 00:44:13.872 { 00:44:13.872 "name": "BaseBdev3", 00:44:13.872 "uuid": "a5ed1700-4722-5d53-91fc-27c7d4867917", 00:44:13.872 "is_configured": true, 00:44:13.872 "data_offset": 2048, 00:44:13.872 "data_size": 63488 00:44:13.872 } 00:44:13.872 ] 00:44:13.872 }' 00:44:13.872 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:44:13.872 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:44:13.872 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:44:14.132 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:44:14.132 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:44:14.132 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:44:14.132 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:44:14.132 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:44:14.132 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:14.132 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:44:14.132 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:14.132 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:44:14.132 05:35:00 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.132 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:14.132 [2024-12-09 05:35:00.893503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:44:14.132 [2024-12-09 05:35:00.893771] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:44:14.132 [2024-12-09 05:35:00.893799] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:44:14.132 request: 00:44:14.132 { 00:44:14.132 "base_bdev": "BaseBdev1", 00:44:14.132 "raid_bdev": "raid_bdev1", 00:44:14.132 "method": "bdev_raid_add_base_bdev", 00:44:14.132 "req_id": 1 00:44:14.132 } 00:44:14.132 Got JSON-RPC error response 00:44:14.132 response: 00:44:14.132 { 00:44:14.132 "code": -22, 00:44:14.132 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:44:14.132 } 00:44:14.132 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:44:14.132 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:44:14.132 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:14.132 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:14.132 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:14.132 05:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:44:15.070 05:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:44:15.070 05:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:44:15.070 05:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- 
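The `NOT rpc_cmd bdev_raid_add_base_bdev ...` step above is a negative test: the RPC is expected to fail with `-22` (the stale superblock makes the base bdev invalid), and the harness converts that expected failure into a passing check via the `NOT`/`valid_exec_arg` helpers in `autotest_common.sh`. A minimal standalone sketch of that pattern (simplified; the real helper also classifies the argument type and handles `es > 128` signal exits) looks like:

```shell
#!/usr/bin/env bash
# Simplified sketch of the negative-test pattern traced above: run a command
# that is EXPECTED to fail and turn its failure into a test success.
# (The real NOT/valid_exec_arg helpers in SPDK's autotest_common.sh do more
# bookkeeping; this only illustrates the core idea.)

NOT() {
    local es=0
    "$@" || es=$?    # capture the exit status without aborting under set -e
    (( es != 0 ))    # succeed only if the wrapped command failed
}

# 'false' fails, so NOT false succeeds; 'true' succeeds, so NOT true fails.
if NOT false; then
    echo "negative test passed"
fi
```

The `es=1` / `(( !es == 0 ))` lines in the trace are this same status bookkeeping as it appears under `set -x`.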
# local expected_state=online 00:44:15.070 05:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:15.070 05:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:15.070 05:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:44:15.070 05:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:15.070 05:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:15.070 05:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:15.070 05:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:15.070 05:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:15.070 05:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:15.070 05:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:15.070 05:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:15.070 05:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:15.070 05:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:15.070 "name": "raid_bdev1", 00:44:15.070 "uuid": "8e824002-787f-4adb-8f83-694a91ef8297", 00:44:15.070 "strip_size_kb": 64, 00:44:15.070 "state": "online", 00:44:15.070 "raid_level": "raid5f", 00:44:15.070 "superblock": true, 00:44:15.070 "num_base_bdevs": 3, 00:44:15.070 "num_base_bdevs_discovered": 2, 00:44:15.070 "num_base_bdevs_operational": 2, 00:44:15.070 "base_bdevs_list": [ 00:44:15.070 { 00:44:15.070 "name": null, 00:44:15.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:15.070 "is_configured": false, 00:44:15.070 
"data_offset": 0, 00:44:15.070 "data_size": 63488 00:44:15.070 }, 00:44:15.070 { 00:44:15.070 "name": "BaseBdev2", 00:44:15.070 "uuid": "01bb00c0-0dc0-5041-ad80-2715552e378c", 00:44:15.070 "is_configured": true, 00:44:15.070 "data_offset": 2048, 00:44:15.070 "data_size": 63488 00:44:15.070 }, 00:44:15.070 { 00:44:15.070 "name": "BaseBdev3", 00:44:15.070 "uuid": "a5ed1700-4722-5d53-91fc-27c7d4867917", 00:44:15.070 "is_configured": true, 00:44:15.070 "data_offset": 2048, 00:44:15.070 "data_size": 63488 00:44:15.070 } 00:44:15.070 ] 00:44:15.070 }' 00:44:15.070 05:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:15.070 05:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:15.639 05:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:44:15.639 05:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:44:15.639 05:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:44:15.639 05:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:44:15.639 05:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:44:15.639 05:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:15.639 05:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:15.639 05:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:15.639 05:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:15.639 05:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:15.639 05:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:44:15.639 "name": 
"raid_bdev1", 00:44:15.639 "uuid": "8e824002-787f-4adb-8f83-694a91ef8297", 00:44:15.639 "strip_size_kb": 64, 00:44:15.639 "state": "online", 00:44:15.639 "raid_level": "raid5f", 00:44:15.639 "superblock": true, 00:44:15.639 "num_base_bdevs": 3, 00:44:15.639 "num_base_bdevs_discovered": 2, 00:44:15.639 "num_base_bdevs_operational": 2, 00:44:15.639 "base_bdevs_list": [ 00:44:15.639 { 00:44:15.639 "name": null, 00:44:15.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:15.639 "is_configured": false, 00:44:15.639 "data_offset": 0, 00:44:15.639 "data_size": 63488 00:44:15.639 }, 00:44:15.639 { 00:44:15.639 "name": "BaseBdev2", 00:44:15.639 "uuid": "01bb00c0-0dc0-5041-ad80-2715552e378c", 00:44:15.639 "is_configured": true, 00:44:15.639 "data_offset": 2048, 00:44:15.639 "data_size": 63488 00:44:15.639 }, 00:44:15.639 { 00:44:15.639 "name": "BaseBdev3", 00:44:15.639 "uuid": "a5ed1700-4722-5d53-91fc-27c7d4867917", 00:44:15.639 "is_configured": true, 00:44:15.639 "data_offset": 2048, 00:44:15.639 "data_size": 63488 00:44:15.639 } 00:44:15.639 ] 00:44:15.639 }' 00:44:15.639 05:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:44:15.639 05:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:44:15.639 05:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:44:15.639 05:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:44:15.639 05:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82513 00:44:15.639 05:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82513 ']' 00:44:15.639 05:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82513 00:44:15.639 05:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:44:15.639 05:35:02 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:15.639 05:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82513 00:44:15.899 05:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:15.899 killing process with pid 82513 00:44:15.899 Received shutdown signal, test time was about 60.000000 seconds 00:44:15.899 00:44:15.899 Latency(us) 00:44:15.899 [2024-12-09T05:35:02.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:15.899 [2024-12-09T05:35:02.871Z] =================================================================================================================== 00:44:15.899 [2024-12-09T05:35:02.871Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:44:15.899 05:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:15.899 05:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82513' 00:44:15.899 05:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82513 00:44:15.899 [2024-12-09 05:35:02.632918] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:44:15.899 05:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82513 00:44:15.899 [2024-12-09 05:35:02.633099] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:44:15.899 [2024-12-09 05:35:02.633236] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:44:15.899 [2024-12-09 05:35:02.633274] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:44:16.158 [2024-12-09 05:35:02.966731] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:44:17.536 05:35:04 
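The repeated `[[ none == \n\o\n\e ]]` checks in the trace above look odd but are a bash idiom: inside `[[ ]]`, the right-hand side of `==` is a glob pattern, so xtrace (`set -x`) prints the expected value with every character backslash-escaped to force a literal comparison. A small sketch of the behavior being relied on:

```shell
#!/usr/bin/env bash
# Why the harness trace shows [[ none == \n\o\n\e ]]: in bash's [[ ]], the
# right-hand side of == is a glob pattern unless escaped or quoted, so the
# escaped form compares literally. set -x prints that escaped form, which is
# what fills the log above.

state="online"

# Unescaped RHS is a pattern: 'on*' matches "online".
[[ $state == on* ]] && echo "glob match"

# Escaped RHS is literal: 'on\*' does NOT match "online".
[[ $state == on\* ]] || echo "literal mismatch"

# Quoting the RHS has the same literalizing effect.
[[ $state == "online" ]] && echo "literal match"
```

The jq filters beside those checks (`.process.type // "none"`) use jq's alternative operator to substitute `"none"` when the raid bdev reports no background process, which is exactly the value the escaped comparison then asserts.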
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:44:17.536 00:44:17.536 real 0m25.012s 00:44:17.536 user 0m33.269s 00:44:17.536 sys 0m2.734s 00:44:17.536 05:35:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:17.536 05:35:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:17.536 ************************************ 00:44:17.536 END TEST raid5f_rebuild_test_sb 00:44:17.536 ************************************ 00:44:17.536 05:35:04 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:44:17.536 05:35:04 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:44:17.536 05:35:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:44:17.536 05:35:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:17.536 05:35:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:44:17.536 ************************************ 00:44:17.536 START TEST raid5f_state_function_test 00:44:17.536 ************************************ 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83273 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83273' 00:44:17.536 Process raid pid: 83273 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83273 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83273 ']' 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:17.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:17.536 05:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:17.536 [2024-12-09 05:35:04.260490] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:44:17.536 [2024-12-09 05:35:04.260954] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:17.536 [2024-12-09 05:35:04.443949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:17.795 [2024-12-09 05:35:04.573018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:18.053 [2024-12-09 05:35:04.777576] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:44:18.053 [2024-12-09 05:35:04.777912] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:44:18.312 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:18.312 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:44:18.312 05:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:44:18.312 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:18.312 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:18.312 [2024-12-09 05:35:05.217640] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:44:18.312 [2024-12-09 05:35:05.217737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:44:18.312 [2024-12-09 
05:35:05.217753] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:44:18.312 [2024-12-09 05:35:05.217785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:44:18.312 [2024-12-09 05:35:05.217809] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:44:18.312 [2024-12-09 05:35:05.217825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:44:18.312 [2024-12-09 05:35:05.217835] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:44:18.312 [2024-12-09 05:35:05.217850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:44:18.312 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:18.312 05:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:18.312 05:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:44:18.312 05:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:44:18.312 05:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:18.312 05:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:18.312 05:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:44:18.312 05:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:18.312 05:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:18.312 05:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:18.312 05:35:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:18.312 05:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:18.312 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:18.312 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:18.312 05:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:18.312 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:18.312 05:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:18.312 "name": "Existed_Raid", 00:44:18.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:18.312 "strip_size_kb": 64, 00:44:18.312 "state": "configuring", 00:44:18.312 "raid_level": "raid5f", 00:44:18.312 "superblock": false, 00:44:18.312 "num_base_bdevs": 4, 00:44:18.312 "num_base_bdevs_discovered": 0, 00:44:18.312 "num_base_bdevs_operational": 4, 00:44:18.312 "base_bdevs_list": [ 00:44:18.312 { 00:44:18.313 "name": "BaseBdev1", 00:44:18.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:18.313 "is_configured": false, 00:44:18.313 "data_offset": 0, 00:44:18.313 "data_size": 0 00:44:18.313 }, 00:44:18.313 { 00:44:18.313 "name": "BaseBdev2", 00:44:18.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:18.313 "is_configured": false, 00:44:18.313 "data_offset": 0, 00:44:18.313 "data_size": 0 00:44:18.313 }, 00:44:18.313 { 00:44:18.313 "name": "BaseBdev3", 00:44:18.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:18.313 "is_configured": false, 00:44:18.313 "data_offset": 0, 00:44:18.313 "data_size": 0 00:44:18.313 }, 00:44:18.313 { 00:44:18.313 "name": "BaseBdev4", 00:44:18.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:18.313 "is_configured": false, 00:44:18.313 
"data_offset": 0, 00:44:18.313 "data_size": 0 00:44:18.313 } 00:44:18.313 ] 00:44:18.313 }' 00:44:18.313 05:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:18.313 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:18.901 05:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:44:18.901 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:18.901 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:18.901 [2024-12-09 05:35:05.721697] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:44:18.901 [2024-12-09 05:35:05.721761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:44:18.901 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:18.901 05:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:44:18.901 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:18.901 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:18.901 [2024-12-09 05:35:05.729666] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:44:18.901 [2024-12-09 05:35:05.729921] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:44:18.901 [2024-12-09 05:35:05.730047] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:44:18.901 [2024-12-09 05:35:05.730208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:44:18.901 [2024-12-09 
05:35:05.730231] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:44:18.901 [2024-12-09 05:35:05.730249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:44:18.901 [2024-12-09 05:35:05.730259] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:44:18.901 [2024-12-09 05:35:05.730273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:44:18.901 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:18.901 05:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:44:18.901 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:18.901 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:18.901 [2024-12-09 05:35:05.773616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:44:18.901 BaseBdev1 00:44:18.901 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:18.901 05:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:44:18.901 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:44:18.901 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:44:18.901 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:44:18.901 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:44:18.901 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:44:18.901 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:44:18.901 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:18.901 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:18.901 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:18.901 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:44:18.901 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:18.901 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:18.901 [ 00:44:18.901 { 00:44:18.901 "name": "BaseBdev1", 00:44:18.901 "aliases": [ 00:44:18.901 "0b0a80d6-eda1-438b-9066-66e04d94e6fc" 00:44:18.901 ], 00:44:18.901 "product_name": "Malloc disk", 00:44:18.901 "block_size": 512, 00:44:18.901 "num_blocks": 65536, 00:44:18.901 "uuid": "0b0a80d6-eda1-438b-9066-66e04d94e6fc", 00:44:18.901 "assigned_rate_limits": { 00:44:18.901 "rw_ios_per_sec": 0, 00:44:18.901 "rw_mbytes_per_sec": 0, 00:44:18.901 "r_mbytes_per_sec": 0, 00:44:18.901 "w_mbytes_per_sec": 0 00:44:18.901 }, 00:44:18.901 "claimed": true, 00:44:18.901 "claim_type": "exclusive_write", 00:44:18.901 "zoned": false, 00:44:18.901 "supported_io_types": { 00:44:18.901 "read": true, 00:44:18.901 "write": true, 00:44:18.901 "unmap": true, 00:44:18.901 "flush": true, 00:44:18.901 "reset": true, 00:44:18.901 "nvme_admin": false, 00:44:18.901 "nvme_io": false, 00:44:18.901 "nvme_io_md": false, 00:44:18.901 "write_zeroes": true, 00:44:18.901 "zcopy": true, 00:44:18.901 "get_zone_info": false, 00:44:18.901 "zone_management": false, 00:44:18.901 "zone_append": false, 00:44:18.901 "compare": false, 00:44:18.901 "compare_and_write": false, 00:44:18.901 "abort": true, 00:44:18.901 "seek_hole": false, 00:44:18.901 "seek_data": false, 00:44:18.901 "copy": true, 00:44:18.901 
"nvme_iov_md": false 00:44:18.901 }, 00:44:18.901 "memory_domains": [ 00:44:18.901 { 00:44:18.901 "dma_device_id": "system", 00:44:18.901 "dma_device_type": 1 00:44:18.901 }, 00:44:18.901 { 00:44:18.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:44:18.901 "dma_device_type": 2 00:44:18.901 } 00:44:18.901 ], 00:44:18.901 "driver_specific": {} 00:44:18.901 } 00:44:18.901 ] 00:44:18.901 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:18.901 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:44:18.901 05:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:18.901 05:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:44:18.901 05:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:44:18.901 05:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:18.901 05:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:18.902 05:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:44:18.902 05:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:18.902 05:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:18.902 05:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:18.902 05:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:18.902 05:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:18.902 05:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:44:18.902 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:18.902 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:18.902 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:18.902 05:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:18.902 "name": "Existed_Raid", 00:44:18.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:18.902 "strip_size_kb": 64, 00:44:18.902 "state": "configuring", 00:44:18.902 "raid_level": "raid5f", 00:44:18.902 "superblock": false, 00:44:18.902 "num_base_bdevs": 4, 00:44:18.902 "num_base_bdevs_discovered": 1, 00:44:18.902 "num_base_bdevs_operational": 4, 00:44:18.902 "base_bdevs_list": [ 00:44:18.902 { 00:44:18.902 "name": "BaseBdev1", 00:44:18.902 "uuid": "0b0a80d6-eda1-438b-9066-66e04d94e6fc", 00:44:18.902 "is_configured": true, 00:44:18.902 "data_offset": 0, 00:44:18.902 "data_size": 65536 00:44:18.902 }, 00:44:18.902 { 00:44:18.902 "name": "BaseBdev2", 00:44:18.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:18.902 "is_configured": false, 00:44:18.902 "data_offset": 0, 00:44:18.902 "data_size": 0 00:44:18.902 }, 00:44:18.902 { 00:44:18.902 "name": "BaseBdev3", 00:44:18.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:18.902 "is_configured": false, 00:44:18.902 "data_offset": 0, 00:44:18.902 "data_size": 0 00:44:18.902 }, 00:44:18.902 { 00:44:18.902 "name": "BaseBdev4", 00:44:18.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:18.902 "is_configured": false, 00:44:18.902 "data_offset": 0, 00:44:18.902 "data_size": 0 00:44:18.902 } 00:44:18.902 ] 00:44:18.902 }' 00:44:18.902 05:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:18.902 05:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
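The trace above shows `verify_raid_bdev_state` capturing `raid_bdev_info` by piping `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` and checking fields such as `state` and `num_base_bdevs_discovered`. A minimal runnable sketch of that check, using a static JSON stand-in for the RPC output (a live run would query a running SPDK target instead):

```shell
# Stand-in for the output of:
#   rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
# Field values are taken from the log above; the RPC itself is not invoked here.
raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid5f",
  "strip_size_kb": 64,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 4
}'

# Extract the fields verify_raid_bdev_state compares against its arguments.
state=$(jq -r '.state' <<< "$raid_bdev_info")
num_discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$raid_bdev_info")

# The array stays "configuring" until all base bdevs are discovered.
echo "state=$state discovered=$num_discovered"
```

The test advances `num_base_bdevs_discovered` one bdev at a time, which is why the same query repeats after each `bdev_malloc_create`.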
00:44:19.493 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:44:19.493 05:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:19.493 05:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:19.493 [2024-12-09 05:35:06.313890] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:44:19.493 [2024-12-09 05:35:06.313965] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:44:19.493 05:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:19.493 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:44:19.493 05:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:19.493 05:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:19.493 [2024-12-09 05:35:06.325923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:44:19.493 [2024-12-09 05:35:06.328667] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:44:19.493 [2024-12-09 05:35:06.328916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:44:19.493 [2024-12-09 05:35:06.329039] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:44:19.493 [2024-12-09 05:35:06.329222] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:44:19.493 [2024-12-09 05:35:06.329336] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:44:19.493 [2024-12-09 05:35:06.329476] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:44:19.493 05:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:19.493 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:44:19.493 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:44:19.493 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:19.493 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:44:19.493 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:44:19.493 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:19.493 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:19.493 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:44:19.493 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:19.493 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:19.493 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:19.493 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:19.493 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:19.493 05:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:19.493 05:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:19.493 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:19.493 05:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:19.493 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:19.493 "name": "Existed_Raid", 00:44:19.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:19.493 "strip_size_kb": 64, 00:44:19.493 "state": "configuring", 00:44:19.493 "raid_level": "raid5f", 00:44:19.493 "superblock": false, 00:44:19.493 "num_base_bdevs": 4, 00:44:19.493 "num_base_bdevs_discovered": 1, 00:44:19.493 "num_base_bdevs_operational": 4, 00:44:19.493 "base_bdevs_list": [ 00:44:19.493 { 00:44:19.493 "name": "BaseBdev1", 00:44:19.493 "uuid": "0b0a80d6-eda1-438b-9066-66e04d94e6fc", 00:44:19.493 "is_configured": true, 00:44:19.493 "data_offset": 0, 00:44:19.493 "data_size": 65536 00:44:19.493 }, 00:44:19.493 { 00:44:19.493 "name": "BaseBdev2", 00:44:19.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:19.493 "is_configured": false, 00:44:19.493 "data_offset": 0, 00:44:19.493 "data_size": 0 00:44:19.493 }, 00:44:19.493 { 00:44:19.493 "name": "BaseBdev3", 00:44:19.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:19.493 "is_configured": false, 00:44:19.493 "data_offset": 0, 00:44:19.493 "data_size": 0 00:44:19.493 }, 00:44:19.493 { 00:44:19.493 "name": "BaseBdev4", 00:44:19.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:19.493 "is_configured": false, 00:44:19.493 "data_offset": 0, 00:44:19.493 "data_size": 0 00:44:19.493 } 00:44:19.493 ] 00:44:19.493 }' 00:44:19.493 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:19.493 05:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:20.060 [2024-12-09 05:35:06.904634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:44:20.060 BaseBdev2 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:20.060 [ 00:44:20.060 { 00:44:20.060 "name": "BaseBdev2", 00:44:20.060 "aliases": [ 
00:44:20.060 "ad27c201-3e3f-47df-97b3-1e9f69ffd88a" 00:44:20.060 ], 00:44:20.060 "product_name": "Malloc disk", 00:44:20.060 "block_size": 512, 00:44:20.060 "num_blocks": 65536, 00:44:20.060 "uuid": "ad27c201-3e3f-47df-97b3-1e9f69ffd88a", 00:44:20.060 "assigned_rate_limits": { 00:44:20.060 "rw_ios_per_sec": 0, 00:44:20.060 "rw_mbytes_per_sec": 0, 00:44:20.060 "r_mbytes_per_sec": 0, 00:44:20.060 "w_mbytes_per_sec": 0 00:44:20.060 }, 00:44:20.060 "claimed": true, 00:44:20.060 "claim_type": "exclusive_write", 00:44:20.060 "zoned": false, 00:44:20.060 "supported_io_types": { 00:44:20.060 "read": true, 00:44:20.060 "write": true, 00:44:20.060 "unmap": true, 00:44:20.060 "flush": true, 00:44:20.060 "reset": true, 00:44:20.060 "nvme_admin": false, 00:44:20.060 "nvme_io": false, 00:44:20.060 "nvme_io_md": false, 00:44:20.060 "write_zeroes": true, 00:44:20.060 "zcopy": true, 00:44:20.060 "get_zone_info": false, 00:44:20.060 "zone_management": false, 00:44:20.060 "zone_append": false, 00:44:20.060 "compare": false, 00:44:20.060 "compare_and_write": false, 00:44:20.060 "abort": true, 00:44:20.060 "seek_hole": false, 00:44:20.060 "seek_data": false, 00:44:20.060 "copy": true, 00:44:20.060 "nvme_iov_md": false 00:44:20.060 }, 00:44:20.060 "memory_domains": [ 00:44:20.060 { 00:44:20.060 "dma_device_id": "system", 00:44:20.060 "dma_device_type": 1 00:44:20.060 }, 00:44:20.060 { 00:44:20.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:44:20.060 "dma_device_type": 2 00:44:20.060 } 00:44:20.060 ], 00:44:20.060 "driver_specific": {} 00:44:20.060 } 00:44:20.060 ] 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
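The `(( i++ ))` / `(( i < num_base_bdevs ))` counters traced above drive a per-base-bdev loop in `bdev_raid.sh`: each iteration creates one malloc bdev with `rpc_cmd bdev_malloc_create 32 512 -b BaseBdevN` and re-verifies the raid state. A simplified, hedged sketch of that loop shape, with the RPC stubbed so the control flow runs on its own:

```shell
# Simplified shape of the base-bdev creation loop. The real test talks to
# a running SPDK target; here rpc_cmd is a stub so the loop is runnable.
num_base_bdevs=4
created=()

rpc_cmd() {  # stub: a live run would issue the JSON-RPC call instead
    echo "rpc: $*" >/dev/null
}

for (( i = 1; i <= num_base_bdevs; i++ )); do
    name="BaseBdev$i"
    # 32 MiB malloc bdev with 512-byte blocks, as in the log above.
    rpc_cmd bdev_malloc_create 32 512 -b "$name"
    created+=("$name")
done

echo "${#created[@]} base bdevs: ${created[*]}"
```

In the actual test `BaseBdev1` is created before the loop and the counter covers the remainder; the sketch flattens that into a single loop for clarity.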
00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:20.060 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:20.060 "name": "Existed_Raid", 00:44:20.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:20.060 "strip_size_kb": 64, 
00:44:20.060 "state": "configuring", 00:44:20.060 "raid_level": "raid5f", 00:44:20.060 "superblock": false, 00:44:20.060 "num_base_bdevs": 4, 00:44:20.060 "num_base_bdevs_discovered": 2, 00:44:20.060 "num_base_bdevs_operational": 4, 00:44:20.060 "base_bdevs_list": [ 00:44:20.060 { 00:44:20.060 "name": "BaseBdev1", 00:44:20.060 "uuid": "0b0a80d6-eda1-438b-9066-66e04d94e6fc", 00:44:20.060 "is_configured": true, 00:44:20.060 "data_offset": 0, 00:44:20.060 "data_size": 65536 00:44:20.060 }, 00:44:20.060 { 00:44:20.060 "name": "BaseBdev2", 00:44:20.060 "uuid": "ad27c201-3e3f-47df-97b3-1e9f69ffd88a", 00:44:20.060 "is_configured": true, 00:44:20.060 "data_offset": 0, 00:44:20.060 "data_size": 65536 00:44:20.060 }, 00:44:20.060 { 00:44:20.060 "name": "BaseBdev3", 00:44:20.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:20.061 "is_configured": false, 00:44:20.061 "data_offset": 0, 00:44:20.061 "data_size": 0 00:44:20.061 }, 00:44:20.061 { 00:44:20.061 "name": "BaseBdev4", 00:44:20.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:20.061 "is_configured": false, 00:44:20.061 "data_offset": 0, 00:44:20.061 "data_size": 0 00:44:20.061 } 00:44:20.061 ] 00:44:20.061 }' 00:44:20.061 05:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:20.061 05:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:20.650 05:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:44:20.650 05:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:20.650 05:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:20.650 BaseBdev3 00:44:20.650 [2024-12-09 05:35:07.529322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:44:20.650 05:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:44:20.650 05:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:44:20.650 05:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:44:20.650 05:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:44:20.650 05:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:44:20.650 05:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:44:20.650 05:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:44:20.650 05:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:44:20.650 05:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:20.650 05:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:20.650 05:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:20.650 05:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:44:20.650 05:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:20.650 05:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:20.650 [ 00:44:20.650 { 00:44:20.650 "name": "BaseBdev3", 00:44:20.650 "aliases": [ 00:44:20.650 "e300dfbc-eb66-4cf8-8e9e-a9089b0cd4fe" 00:44:20.650 ], 00:44:20.650 "product_name": "Malloc disk", 00:44:20.650 "block_size": 512, 00:44:20.650 "num_blocks": 65536, 00:44:20.650 "uuid": "e300dfbc-eb66-4cf8-8e9e-a9089b0cd4fe", 00:44:20.650 "assigned_rate_limits": { 00:44:20.650 "rw_ios_per_sec": 0, 00:44:20.650 "rw_mbytes_per_sec": 0, 00:44:20.650 "r_mbytes_per_sec": 0, 00:44:20.650 
"w_mbytes_per_sec": 0 00:44:20.650 }, 00:44:20.650 "claimed": true, 00:44:20.650 "claim_type": "exclusive_write", 00:44:20.650 "zoned": false, 00:44:20.650 "supported_io_types": { 00:44:20.651 "read": true, 00:44:20.651 "write": true, 00:44:20.651 "unmap": true, 00:44:20.651 "flush": true, 00:44:20.651 "reset": true, 00:44:20.651 "nvme_admin": false, 00:44:20.651 "nvme_io": false, 00:44:20.651 "nvme_io_md": false, 00:44:20.651 "write_zeroes": true, 00:44:20.651 "zcopy": true, 00:44:20.651 "get_zone_info": false, 00:44:20.651 "zone_management": false, 00:44:20.651 "zone_append": false, 00:44:20.651 "compare": false, 00:44:20.651 "compare_and_write": false, 00:44:20.651 "abort": true, 00:44:20.651 "seek_hole": false, 00:44:20.651 "seek_data": false, 00:44:20.651 "copy": true, 00:44:20.651 "nvme_iov_md": false 00:44:20.651 }, 00:44:20.651 "memory_domains": [ 00:44:20.651 { 00:44:20.651 "dma_device_id": "system", 00:44:20.651 "dma_device_type": 1 00:44:20.651 }, 00:44:20.651 { 00:44:20.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:44:20.651 "dma_device_type": 2 00:44:20.651 } 00:44:20.651 ], 00:44:20.651 "driver_specific": {} 00:44:20.651 } 00:44:20.651 ] 00:44:20.651 05:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:20.651 05:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:44:20.651 05:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:44:20.651 05:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:44:20.651 05:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:20.651 05:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:44:20.651 05:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 
00:44:20.651 05:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:20.651 05:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:20.651 05:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:44:20.651 05:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:20.651 05:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:20.651 05:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:20.651 05:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:20.651 05:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:20.651 05:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:20.651 05:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:20.651 05:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:20.651 05:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:20.651 05:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:20.651 "name": "Existed_Raid", 00:44:20.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:20.651 "strip_size_kb": 64, 00:44:20.651 "state": "configuring", 00:44:20.651 "raid_level": "raid5f", 00:44:20.651 "superblock": false, 00:44:20.651 "num_base_bdevs": 4, 00:44:20.651 "num_base_bdevs_discovered": 3, 00:44:20.651 "num_base_bdevs_operational": 4, 00:44:20.651 "base_bdevs_list": [ 00:44:20.651 { 00:44:20.651 "name": "BaseBdev1", 00:44:20.651 "uuid": "0b0a80d6-eda1-438b-9066-66e04d94e6fc", 00:44:20.651 
"is_configured": true, 00:44:20.651 "data_offset": 0, 00:44:20.651 "data_size": 65536 00:44:20.651 }, 00:44:20.651 { 00:44:20.651 "name": "BaseBdev2", 00:44:20.651 "uuid": "ad27c201-3e3f-47df-97b3-1e9f69ffd88a", 00:44:20.651 "is_configured": true, 00:44:20.651 "data_offset": 0, 00:44:20.651 "data_size": 65536 00:44:20.651 }, 00:44:20.651 { 00:44:20.651 "name": "BaseBdev3", 00:44:20.651 "uuid": "e300dfbc-eb66-4cf8-8e9e-a9089b0cd4fe", 00:44:20.651 "is_configured": true, 00:44:20.651 "data_offset": 0, 00:44:20.651 "data_size": 65536 00:44:20.651 }, 00:44:20.651 { 00:44:20.651 "name": "BaseBdev4", 00:44:20.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:20.651 "is_configured": false, 00:44:20.651 "data_offset": 0, 00:44:20.651 "data_size": 0 00:44:20.651 } 00:44:20.651 ] 00:44:20.651 }' 00:44:20.651 05:35:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:20.651 05:35:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:21.217 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:44:21.217 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:21.217 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:21.217 [2024-12-09 05:35:08.125244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:44:21.217 [2024-12-09 05:35:08.125334] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:44:21.217 [2024-12-09 05:35:08.125357] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:44:21.217 [2024-12-09 05:35:08.125717] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:44:21.217 [2024-12-09 05:35:08.133300] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
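Once the fourth base bdev is claimed, the log reports the assembled array as `blockcnt 196608, blocklen 512`. That figure is consistent with raid5f reserving one base bdev's worth of capacity for parity, i.e. usable blocks = (n - 1) × per-bdev blocks, given four 65536-block malloc bdevs:

```shell
# Capacity arithmetic behind "blockcnt 196608" in the log: raid5f keeps
# one base bdev's worth of space for parity across the stripe set.
num_base_bdevs=4
base_blocks=65536

raid5f_blocks=$(( (num_base_bdevs - 1) * base_blocks ))
echo "$raid5f_blocks"
```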
00:44:21.217 [2024-12-09 05:35:08.133332] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:44:21.217 [2024-12-09 05:35:08.133731] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:44:21.217 BaseBdev4 00:44:21.217 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:21.217 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:44:21.217 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:44:21.217 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:44:21.217 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:44:21.217 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:44:21.217 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:44:21.217 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:44:21.217 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:21.217 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:21.217 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:21.217 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:44:21.217 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:21.217 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:21.217 [ 00:44:21.217 { 00:44:21.217 "name": "BaseBdev4", 00:44:21.217 "aliases": [ 00:44:21.217 
"f8186e55-c29f-4e3a-8779-a642ccf770ff" 00:44:21.217 ], 00:44:21.217 "product_name": "Malloc disk", 00:44:21.217 "block_size": 512, 00:44:21.217 "num_blocks": 65536, 00:44:21.217 "uuid": "f8186e55-c29f-4e3a-8779-a642ccf770ff", 00:44:21.217 "assigned_rate_limits": { 00:44:21.217 "rw_ios_per_sec": 0, 00:44:21.217 "rw_mbytes_per_sec": 0, 00:44:21.217 "r_mbytes_per_sec": 0, 00:44:21.217 "w_mbytes_per_sec": 0 00:44:21.217 }, 00:44:21.217 "claimed": true, 00:44:21.217 "claim_type": "exclusive_write", 00:44:21.217 "zoned": false, 00:44:21.217 "supported_io_types": { 00:44:21.217 "read": true, 00:44:21.217 "write": true, 00:44:21.217 "unmap": true, 00:44:21.217 "flush": true, 00:44:21.217 "reset": true, 00:44:21.218 "nvme_admin": false, 00:44:21.218 "nvme_io": false, 00:44:21.218 "nvme_io_md": false, 00:44:21.218 "write_zeroes": true, 00:44:21.218 "zcopy": true, 00:44:21.218 "get_zone_info": false, 00:44:21.218 "zone_management": false, 00:44:21.218 "zone_append": false, 00:44:21.218 "compare": false, 00:44:21.218 "compare_and_write": false, 00:44:21.218 "abort": true, 00:44:21.218 "seek_hole": false, 00:44:21.218 "seek_data": false, 00:44:21.218 "copy": true, 00:44:21.218 "nvme_iov_md": false 00:44:21.218 }, 00:44:21.218 "memory_domains": [ 00:44:21.218 { 00:44:21.218 "dma_device_id": "system", 00:44:21.218 "dma_device_type": 1 00:44:21.218 }, 00:44:21.218 { 00:44:21.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:44:21.218 "dma_device_type": 2 00:44:21.218 } 00:44:21.218 ], 00:44:21.218 "driver_specific": {} 00:44:21.218 } 00:44:21.218 ] 00:44:21.218 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:21.218 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:44:21.218 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:44:21.218 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:44:21.218 
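The `waitforbdev` helper traced above (`autotest_common.sh@903`–`@911`) appears to probe for a bdev with `rpc_cmd bdev_get_bdevs -b NAME -t 2000`, defaulting `bdev_timeout` to 2000 ms and returning 0 once the bdev is visible. A generic, hedged sketch of that retry-until-deadline pattern, with the RPC probe stubbed so the control flow is runnable standalone:

```shell
# Generic waitforbdev-style helper: retry a probe command until it
# succeeds or the attempt budget is exhausted. The real helper's probe is
# "rpc_cmd bdev_get_bdevs -b NAME -t 2000"; here any command works.
waitfor() {
    local tries=$1; shift
    local i
    for (( i = 0; i < tries; i++ )); do
        if "$@"; then
            return 0          # probe succeeded: bdev is available
        fi
        sleep 0.1             # back off briefly before retrying
    done
    return 1                  # budget exhausted: treat as timeout
}

waitfor 5 true && result=found || result=missing
echo "$result"
```

Returning nonzero on timeout lets the caller's `return 0` / error-trap convention (visible at `@911 -- # return 0` in the trace) propagate failures to the test harness.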
05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:44:21.218 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:44:21.218 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:44:21.218 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:21.218 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:21.218 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:44:21.218 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:21.218 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:21.218 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:21.218 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:21.218 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:21.218 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:21.218 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:21.218 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:21.218 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:21.476 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:21.476 "name": "Existed_Raid", 00:44:21.476 "uuid": "a64ba934-e3eb-4d5d-bb2a-ae07bcd7222f", 00:44:21.476 "strip_size_kb": 64, 00:44:21.476 "state": 
"online", 00:44:21.476 "raid_level": "raid5f", 00:44:21.476 "superblock": false, 00:44:21.476 "num_base_bdevs": 4, 00:44:21.476 "num_base_bdevs_discovered": 4, 00:44:21.476 "num_base_bdevs_operational": 4, 00:44:21.476 "base_bdevs_list": [ 00:44:21.476 { 00:44:21.476 "name": "BaseBdev1", 00:44:21.476 "uuid": "0b0a80d6-eda1-438b-9066-66e04d94e6fc", 00:44:21.476 "is_configured": true, 00:44:21.476 "data_offset": 0, 00:44:21.476 "data_size": 65536 00:44:21.476 }, 00:44:21.476 { 00:44:21.476 "name": "BaseBdev2", 00:44:21.476 "uuid": "ad27c201-3e3f-47df-97b3-1e9f69ffd88a", 00:44:21.476 "is_configured": true, 00:44:21.476 "data_offset": 0, 00:44:21.476 "data_size": 65536 00:44:21.476 }, 00:44:21.476 { 00:44:21.476 "name": "BaseBdev3", 00:44:21.476 "uuid": "e300dfbc-eb66-4cf8-8e9e-a9089b0cd4fe", 00:44:21.476 "is_configured": true, 00:44:21.476 "data_offset": 0, 00:44:21.476 "data_size": 65536 00:44:21.476 }, 00:44:21.476 { 00:44:21.476 "name": "BaseBdev4", 00:44:21.476 "uuid": "f8186e55-c29f-4e3a-8779-a642ccf770ff", 00:44:21.476 "is_configured": true, 00:44:21.476 "data_offset": 0, 00:44:21.476 "data_size": 65536 00:44:21.476 } 00:44:21.476 ] 00:44:21.476 }' 00:44:21.476 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:21.476 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:21.734 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:44:21.734 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:44:21.734 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:44:21.734 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:44:21.734 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:44:21.734 05:35:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:44:21.734 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:44:21.734 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:21.734 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:21.734 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:44:21.734 [2024-12-09 05:35:08.698165] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:44:21.993 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:21.993 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:44:21.993 "name": "Existed_Raid", 00:44:21.993 "aliases": [ 00:44:21.993 "a64ba934-e3eb-4d5d-bb2a-ae07bcd7222f" 00:44:21.993 ], 00:44:21.993 "product_name": "Raid Volume", 00:44:21.993 "block_size": 512, 00:44:21.993 "num_blocks": 196608, 00:44:21.993 "uuid": "a64ba934-e3eb-4d5d-bb2a-ae07bcd7222f", 00:44:21.993 "assigned_rate_limits": { 00:44:21.993 "rw_ios_per_sec": 0, 00:44:21.993 "rw_mbytes_per_sec": 0, 00:44:21.993 "r_mbytes_per_sec": 0, 00:44:21.993 "w_mbytes_per_sec": 0 00:44:21.993 }, 00:44:21.993 "claimed": false, 00:44:21.993 "zoned": false, 00:44:21.993 "supported_io_types": { 00:44:21.993 "read": true, 00:44:21.993 "write": true, 00:44:21.993 "unmap": false, 00:44:21.993 "flush": false, 00:44:21.993 "reset": true, 00:44:21.993 "nvme_admin": false, 00:44:21.993 "nvme_io": false, 00:44:21.993 "nvme_io_md": false, 00:44:21.993 "write_zeroes": true, 00:44:21.993 "zcopy": false, 00:44:21.993 "get_zone_info": false, 00:44:21.993 "zone_management": false, 00:44:21.993 "zone_append": false, 00:44:21.993 "compare": false, 00:44:21.993 "compare_and_write": false, 00:44:21.993 "abort": false, 
00:44:21.993 "seek_hole": false, 00:44:21.993 "seek_data": false, 00:44:21.993 "copy": false, 00:44:21.993 "nvme_iov_md": false 00:44:21.993 }, 00:44:21.993 "driver_specific": { 00:44:21.993 "raid": { 00:44:21.993 "uuid": "a64ba934-e3eb-4d5d-bb2a-ae07bcd7222f", 00:44:21.993 "strip_size_kb": 64, 00:44:21.993 "state": "online", 00:44:21.993 "raid_level": "raid5f", 00:44:21.993 "superblock": false, 00:44:21.993 "num_base_bdevs": 4, 00:44:21.993 "num_base_bdevs_discovered": 4, 00:44:21.993 "num_base_bdevs_operational": 4, 00:44:21.993 "base_bdevs_list": [ 00:44:21.993 { 00:44:21.993 "name": "BaseBdev1", 00:44:21.993 "uuid": "0b0a80d6-eda1-438b-9066-66e04d94e6fc", 00:44:21.993 "is_configured": true, 00:44:21.993 "data_offset": 0, 00:44:21.993 "data_size": 65536 00:44:21.993 }, 00:44:21.993 { 00:44:21.993 "name": "BaseBdev2", 00:44:21.993 "uuid": "ad27c201-3e3f-47df-97b3-1e9f69ffd88a", 00:44:21.993 "is_configured": true, 00:44:21.993 "data_offset": 0, 00:44:21.993 "data_size": 65536 00:44:21.993 }, 00:44:21.993 { 00:44:21.993 "name": "BaseBdev3", 00:44:21.993 "uuid": "e300dfbc-eb66-4cf8-8e9e-a9089b0cd4fe", 00:44:21.993 "is_configured": true, 00:44:21.993 "data_offset": 0, 00:44:21.993 "data_size": 65536 00:44:21.993 }, 00:44:21.993 { 00:44:21.993 "name": "BaseBdev4", 00:44:21.993 "uuid": "f8186e55-c29f-4e3a-8779-a642ccf770ff", 00:44:21.993 "is_configured": true, 00:44:21.993 "data_offset": 0, 00:44:21.993 "data_size": 65536 00:44:21.993 } 00:44:21.993 ] 00:44:21.993 } 00:44:21.993 } 00:44:21.993 }' 00:44:21.993 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:44:21.993 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:44:21.993 BaseBdev2 00:44:21.993 BaseBdev3 00:44:21.993 BaseBdev4' 00:44:21.993 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:44:21.993 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:44:21.993 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:44:21.993 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:44:21.993 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:21.993 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:21.993 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:44:21.993 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:21.993 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:44:21.993 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:44:21.993 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:44:21.993 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:44:21.993 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:44:21.993 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:21.993 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:21.993 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:21.993 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:44:21.993 05:35:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:44:21.993 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:44:21.993 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:44:21.993 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:21.993 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:21.993 05:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:44:22.252 05:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:22.252 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:44:22.252 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:44:22.252 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:44:22.252 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:44:22.252 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:44:22.252 05:35:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:22.252 05:35:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:22.253 05:35:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:22.253 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:44:22.253 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == 
\5\1\2\ \ \ ]] 00:44:22.253 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:44:22.253 05:35:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:22.253 05:35:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:22.253 [2024-12-09 05:35:09.085988] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:44:22.253 05:35:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:22.253 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:44:22.253 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:44:22.253 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:44:22.253 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:44:22.253 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:44:22.253 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:44:22.253 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:44:22.253 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:44:22.253 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:22.253 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:22.253 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:44:22.253 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:22.253 05:35:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:22.253 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:22.253 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:22.253 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:22.253 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:22.253 05:35:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:22.253 05:35:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:22.253 05:35:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:22.512 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:22.512 "name": "Existed_Raid", 00:44:22.512 "uuid": "a64ba934-e3eb-4d5d-bb2a-ae07bcd7222f", 00:44:22.512 "strip_size_kb": 64, 00:44:22.512 "state": "online", 00:44:22.512 "raid_level": "raid5f", 00:44:22.512 "superblock": false, 00:44:22.512 "num_base_bdevs": 4, 00:44:22.512 "num_base_bdevs_discovered": 3, 00:44:22.512 "num_base_bdevs_operational": 3, 00:44:22.512 "base_bdevs_list": [ 00:44:22.512 { 00:44:22.512 "name": null, 00:44:22.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:22.512 "is_configured": false, 00:44:22.512 "data_offset": 0, 00:44:22.512 "data_size": 65536 00:44:22.512 }, 00:44:22.512 { 00:44:22.512 "name": "BaseBdev2", 00:44:22.512 "uuid": "ad27c201-3e3f-47df-97b3-1e9f69ffd88a", 00:44:22.512 "is_configured": true, 00:44:22.512 "data_offset": 0, 00:44:22.512 "data_size": 65536 00:44:22.512 }, 00:44:22.512 { 00:44:22.512 "name": "BaseBdev3", 00:44:22.512 "uuid": "e300dfbc-eb66-4cf8-8e9e-a9089b0cd4fe", 00:44:22.512 "is_configured": true, 00:44:22.512 
"data_offset": 0, 00:44:22.512 "data_size": 65536 00:44:22.512 }, 00:44:22.512 { 00:44:22.512 "name": "BaseBdev4", 00:44:22.512 "uuid": "f8186e55-c29f-4e3a-8779-a642ccf770ff", 00:44:22.512 "is_configured": true, 00:44:22.512 "data_offset": 0, 00:44:22.512 "data_size": 65536 00:44:22.512 } 00:44:22.512 ] 00:44:22.512 }' 00:44:22.512 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:22.512 05:35:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:22.771 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:44:22.771 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:44:22.771 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:22.771 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:44:22.771 05:35:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:22.771 05:35:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:22.771 05:35:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:23.031 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:44:23.031 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:44:23.031 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:44:23.031 05:35:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:23.031 05:35:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:23.031 [2024-12-09 05:35:09.765670] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:44:23.031 
[2024-12-09 05:35:09.766067] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:44:23.031 [2024-12-09 05:35:09.849705] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:44:23.031 05:35:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:23.031 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:44:23.031 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:44:23.031 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:23.031 05:35:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:23.031 05:35:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:23.031 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:44:23.031 05:35:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:23.031 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:44:23.031 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:44:23.031 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:44:23.031 05:35:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:23.031 05:35:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:23.031 [2024-12-09 05:35:09.913722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:44:23.031 05:35:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:23.031 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # 
(( i++ )) 00:44:23.031 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:44:23.031 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:23.031 05:35:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:23.031 05:35:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:23.031 05:35:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:44:23.290 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:23.290 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:44:23.290 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:44:23.290 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:44:23.290 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:23.290 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:23.290 [2024-12-09 05:35:10.059266] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:44:23.290 [2024-12-09 05:35:10.059525] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:44:23.290 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:23.290 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:44:23.290 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:44:23.290 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:23.290 05:35:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:23.290 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:23.290 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:44:23.290 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:23.290 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:44:23.290 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:44:23.290 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:44:23.290 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:44:23.290 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:44:23.290 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:44:23.290 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:23.290 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:23.290 BaseBdev2 00:44:23.290 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:23.290 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:44:23.290 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:44:23.290 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:44:23.290 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:44:23.290 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 
00:44:23.290 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:44:23.291 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:44:23.291 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:23.291 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:23.550 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:23.550 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:44:23.550 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:23.550 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:23.550 [ 00:44:23.550 { 00:44:23.550 "name": "BaseBdev2", 00:44:23.550 "aliases": [ 00:44:23.550 "c167804e-32cc-4208-b2ba-9e4af457c920" 00:44:23.550 ], 00:44:23.550 "product_name": "Malloc disk", 00:44:23.550 "block_size": 512, 00:44:23.550 "num_blocks": 65536, 00:44:23.550 "uuid": "c167804e-32cc-4208-b2ba-9e4af457c920", 00:44:23.550 "assigned_rate_limits": { 00:44:23.550 "rw_ios_per_sec": 0, 00:44:23.550 "rw_mbytes_per_sec": 0, 00:44:23.550 "r_mbytes_per_sec": 0, 00:44:23.550 "w_mbytes_per_sec": 0 00:44:23.550 }, 00:44:23.550 "claimed": false, 00:44:23.550 "zoned": false, 00:44:23.550 "supported_io_types": { 00:44:23.550 "read": true, 00:44:23.550 "write": true, 00:44:23.550 "unmap": true, 00:44:23.550 "flush": true, 00:44:23.550 "reset": true, 00:44:23.550 "nvme_admin": false, 00:44:23.550 "nvme_io": false, 00:44:23.550 "nvme_io_md": false, 00:44:23.550 "write_zeroes": true, 00:44:23.550 "zcopy": true, 00:44:23.550 "get_zone_info": false, 00:44:23.550 "zone_management": false, 00:44:23.550 "zone_append": false, 00:44:23.550 "compare": false, 
00:44:23.550 "compare_and_write": false, 00:44:23.550 "abort": true, 00:44:23.550 "seek_hole": false, 00:44:23.550 "seek_data": false, 00:44:23.550 "copy": true, 00:44:23.550 "nvme_iov_md": false 00:44:23.550 }, 00:44:23.550 "memory_domains": [ 00:44:23.550 { 00:44:23.550 "dma_device_id": "system", 00:44:23.550 "dma_device_type": 1 00:44:23.550 }, 00:44:23.550 { 00:44:23.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:44:23.550 "dma_device_type": 2 00:44:23.550 } 00:44:23.550 ], 00:44:23.550 "driver_specific": {} 00:44:23.550 } 00:44:23.550 ] 00:44:23.550 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:23.550 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:44:23.550 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:44:23.550 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:44:23.550 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:44:23.550 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:23.550 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:23.550 BaseBdev3 00:44:23.550 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:23.550 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:44:23.550 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:44:23.550 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:44:23.550 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:44:23.550 05:35:10 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:44:23.550 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:44:23.550 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:44:23.550 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:23.550 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:23.550 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:23.550 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:44:23.550 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:23.550 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:23.550 [ 00:44:23.550 { 00:44:23.550 "name": "BaseBdev3", 00:44:23.550 "aliases": [ 00:44:23.550 "966ede63-ce64-43b0-8d59-b3ddf3c30614" 00:44:23.550 ], 00:44:23.550 "product_name": "Malloc disk", 00:44:23.550 "block_size": 512, 00:44:23.550 "num_blocks": 65536, 00:44:23.550 "uuid": "966ede63-ce64-43b0-8d59-b3ddf3c30614", 00:44:23.550 "assigned_rate_limits": { 00:44:23.550 "rw_ios_per_sec": 0, 00:44:23.550 "rw_mbytes_per_sec": 0, 00:44:23.550 "r_mbytes_per_sec": 0, 00:44:23.550 "w_mbytes_per_sec": 0 00:44:23.550 }, 00:44:23.550 "claimed": false, 00:44:23.550 "zoned": false, 00:44:23.550 "supported_io_types": { 00:44:23.550 "read": true, 00:44:23.550 "write": true, 00:44:23.550 "unmap": true, 00:44:23.550 "flush": true, 00:44:23.550 "reset": true, 00:44:23.550 "nvme_admin": false, 00:44:23.550 "nvme_io": false, 00:44:23.550 "nvme_io_md": false, 00:44:23.550 "write_zeroes": true, 00:44:23.550 "zcopy": true, 00:44:23.550 "get_zone_info": false, 00:44:23.550 "zone_management": false, 00:44:23.550 "zone_append": 
false, 00:44:23.550 "compare": false, 00:44:23.550 "compare_and_write": false, 00:44:23.550 "abort": true, 00:44:23.550 "seek_hole": false, 00:44:23.550 "seek_data": false, 00:44:23.550 "copy": true, 00:44:23.550 "nvme_iov_md": false 00:44:23.550 }, 00:44:23.550 "memory_domains": [ 00:44:23.550 { 00:44:23.550 "dma_device_id": "system", 00:44:23.550 "dma_device_type": 1 00:44:23.550 }, 00:44:23.550 { 00:44:23.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:44:23.550 "dma_device_type": 2 00:44:23.550 } 00:44:23.550 ], 00:44:23.550 "driver_specific": {} 00:44:23.550 } 00:44:23.550 ] 00:44:23.550 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:23.550 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:44:23.550 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:44:23.550 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:23.551 BaseBdev4 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:44:23.551 05:35:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:23.551 [ 00:44:23.551 { 00:44:23.551 "name": "BaseBdev4", 00:44:23.551 "aliases": [ 00:44:23.551 "96c36a5b-b596-4966-8277-a6178b4a08fb" 00:44:23.551 ], 00:44:23.551 "product_name": "Malloc disk", 00:44:23.551 "block_size": 512, 00:44:23.551 "num_blocks": 65536, 00:44:23.551 "uuid": "96c36a5b-b596-4966-8277-a6178b4a08fb", 00:44:23.551 "assigned_rate_limits": { 00:44:23.551 "rw_ios_per_sec": 0, 00:44:23.551 "rw_mbytes_per_sec": 0, 00:44:23.551 "r_mbytes_per_sec": 0, 00:44:23.551 "w_mbytes_per_sec": 0 00:44:23.551 }, 00:44:23.551 "claimed": false, 00:44:23.551 "zoned": false, 00:44:23.551 "supported_io_types": { 00:44:23.551 "read": true, 00:44:23.551 "write": true, 00:44:23.551 "unmap": true, 00:44:23.551 "flush": true, 00:44:23.551 "reset": true, 00:44:23.551 "nvme_admin": false, 00:44:23.551 "nvme_io": false, 00:44:23.551 "nvme_io_md": false, 00:44:23.551 "write_zeroes": true, 00:44:23.551 "zcopy": true, 00:44:23.551 "get_zone_info": false, 00:44:23.551 
"zone_management": false, 00:44:23.551 "zone_append": false, 00:44:23.551 "compare": false, 00:44:23.551 "compare_and_write": false, 00:44:23.551 "abort": true, 00:44:23.551 "seek_hole": false, 00:44:23.551 "seek_data": false, 00:44:23.551 "copy": true, 00:44:23.551 "nvme_iov_md": false 00:44:23.551 }, 00:44:23.551 "memory_domains": [ 00:44:23.551 { 00:44:23.551 "dma_device_id": "system", 00:44:23.551 "dma_device_type": 1 00:44:23.551 }, 00:44:23.551 { 00:44:23.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:44:23.551 "dma_device_type": 2 00:44:23.551 } 00:44:23.551 ], 00:44:23.551 "driver_specific": {} 00:44:23.551 } 00:44:23.551 ] 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:23.551 [2024-12-09 05:35:10.431985] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:44:23.551 [2024-12-09 05:35:10.432193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:44:23.551 [2024-12-09 05:35:10.432259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:44:23.551 [2024-12-09 05:35:10.435142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 
00:44:23.551 [2024-12-09 05:35:10.435401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:23.551 "name": "Existed_Raid", 00:44:23.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:23.551 "strip_size_kb": 64, 00:44:23.551 "state": "configuring", 00:44:23.551 "raid_level": "raid5f", 00:44:23.551 "superblock": false, 00:44:23.551 "num_base_bdevs": 4, 00:44:23.551 "num_base_bdevs_discovered": 3, 00:44:23.551 "num_base_bdevs_operational": 4, 00:44:23.551 "base_bdevs_list": [ 00:44:23.551 { 00:44:23.551 "name": "BaseBdev1", 00:44:23.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:23.551 "is_configured": false, 00:44:23.551 "data_offset": 0, 00:44:23.551 "data_size": 0 00:44:23.551 }, 00:44:23.551 { 00:44:23.551 "name": "BaseBdev2", 00:44:23.551 "uuid": "c167804e-32cc-4208-b2ba-9e4af457c920", 00:44:23.551 "is_configured": true, 00:44:23.551 "data_offset": 0, 00:44:23.551 "data_size": 65536 00:44:23.551 }, 00:44:23.551 { 00:44:23.551 "name": "BaseBdev3", 00:44:23.551 "uuid": "966ede63-ce64-43b0-8d59-b3ddf3c30614", 00:44:23.551 "is_configured": true, 00:44:23.551 "data_offset": 0, 00:44:23.551 "data_size": 65536 00:44:23.551 }, 00:44:23.551 { 00:44:23.551 "name": "BaseBdev4", 00:44:23.551 "uuid": "96c36a5b-b596-4966-8277-a6178b4a08fb", 00:44:23.551 "is_configured": true, 00:44:23.551 "data_offset": 0, 00:44:23.551 "data_size": 65536 00:44:23.551 } 00:44:23.551 ] 00:44:23.551 }' 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:23.551 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:24.119 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:44:24.119 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:24.119 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:44:24.119 [2024-12-09 05:35:10.980260] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:44:24.119 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:24.119 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:24.120 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:44:24.120 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:44:24.120 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:24.120 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:24.120 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:44:24.120 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:24.120 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:24.120 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:24.120 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:24.120 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:24.120 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:24.120 05:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:24.120 05:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:24.120 05:35:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:24.120 05:35:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:24.120 "name": "Existed_Raid", 00:44:24.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:24.120 "strip_size_kb": 64, 00:44:24.120 "state": "configuring", 00:44:24.120 "raid_level": "raid5f", 00:44:24.120 "superblock": false, 00:44:24.120 "num_base_bdevs": 4, 00:44:24.120 "num_base_bdevs_discovered": 2, 00:44:24.120 "num_base_bdevs_operational": 4, 00:44:24.120 "base_bdevs_list": [ 00:44:24.120 { 00:44:24.120 "name": "BaseBdev1", 00:44:24.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:24.120 "is_configured": false, 00:44:24.120 "data_offset": 0, 00:44:24.120 "data_size": 0 00:44:24.120 }, 00:44:24.120 { 00:44:24.120 "name": null, 00:44:24.120 "uuid": "c167804e-32cc-4208-b2ba-9e4af457c920", 00:44:24.120 "is_configured": false, 00:44:24.120 "data_offset": 0, 00:44:24.120 "data_size": 65536 00:44:24.120 }, 00:44:24.120 { 00:44:24.120 "name": "BaseBdev3", 00:44:24.120 "uuid": "966ede63-ce64-43b0-8d59-b3ddf3c30614", 00:44:24.120 "is_configured": true, 00:44:24.120 "data_offset": 0, 00:44:24.120 "data_size": 65536 00:44:24.120 }, 00:44:24.120 { 00:44:24.120 "name": "BaseBdev4", 00:44:24.120 "uuid": "96c36a5b-b596-4966-8277-a6178b4a08fb", 00:44:24.120 "is_configured": true, 00:44:24.120 "data_offset": 0, 00:44:24.120 "data_size": 65536 00:44:24.120 } 00:44:24.120 ] 00:44:24.120 }' 00:44:24.120 05:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:24.120 05:35:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:24.687 05:35:11 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:24.687 [2024-12-09 05:35:11.625375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:44:24.687 BaseBdev1 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:24.687 [ 00:44:24.687 { 00:44:24.687 "name": "BaseBdev1", 00:44:24.687 "aliases": [ 00:44:24.687 "04b99ec5-a35e-466a-9ca6-5cab580fc766" 00:44:24.687 ], 00:44:24.687 "product_name": "Malloc disk", 00:44:24.687 "block_size": 512, 00:44:24.687 "num_blocks": 65536, 00:44:24.687 "uuid": "04b99ec5-a35e-466a-9ca6-5cab580fc766", 00:44:24.687 "assigned_rate_limits": { 00:44:24.687 "rw_ios_per_sec": 0, 00:44:24.687 "rw_mbytes_per_sec": 0, 00:44:24.687 "r_mbytes_per_sec": 0, 00:44:24.687 "w_mbytes_per_sec": 0 00:44:24.687 }, 00:44:24.687 "claimed": true, 00:44:24.687 "claim_type": "exclusive_write", 00:44:24.687 "zoned": false, 00:44:24.687 "supported_io_types": { 00:44:24.687 "read": true, 00:44:24.687 "write": true, 00:44:24.687 "unmap": true, 00:44:24.687 "flush": true, 00:44:24.687 "reset": true, 00:44:24.687 "nvme_admin": false, 00:44:24.687 "nvme_io": false, 00:44:24.687 "nvme_io_md": false, 00:44:24.687 "write_zeroes": true, 00:44:24.687 "zcopy": true, 00:44:24.687 "get_zone_info": false, 00:44:24.687 "zone_management": false, 00:44:24.687 "zone_append": false, 00:44:24.687 "compare": false, 00:44:24.687 "compare_and_write": false, 00:44:24.687 "abort": true, 00:44:24.687 "seek_hole": false, 00:44:24.687 "seek_data": false, 00:44:24.687 "copy": true, 00:44:24.687 "nvme_iov_md": false 00:44:24.687 }, 00:44:24.687 "memory_domains": [ 00:44:24.687 { 00:44:24.687 "dma_device_id": "system", 00:44:24.687 "dma_device_type": 1 00:44:24.687 }, 00:44:24.687 { 00:44:24.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:44:24.687 "dma_device_type": 2 00:44:24.687 } 00:44:24.687 ], 
00:44:24.687 "driver_specific": {} 00:44:24.687 } 00:44:24.687 ] 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:24.687 05:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:24.946 05:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:24.946 05:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:24.946 05:35:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:24.946 05:35:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:24.946 05:35:11 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:24.946 05:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:24.946 "name": "Existed_Raid", 00:44:24.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:24.946 "strip_size_kb": 64, 00:44:24.946 "state": "configuring", 00:44:24.946 "raid_level": "raid5f", 00:44:24.946 "superblock": false, 00:44:24.946 "num_base_bdevs": 4, 00:44:24.946 "num_base_bdevs_discovered": 3, 00:44:24.946 "num_base_bdevs_operational": 4, 00:44:24.946 "base_bdevs_list": [ 00:44:24.946 { 00:44:24.946 "name": "BaseBdev1", 00:44:24.946 "uuid": "04b99ec5-a35e-466a-9ca6-5cab580fc766", 00:44:24.946 "is_configured": true, 00:44:24.946 "data_offset": 0, 00:44:24.946 "data_size": 65536 00:44:24.946 }, 00:44:24.946 { 00:44:24.946 "name": null, 00:44:24.946 "uuid": "c167804e-32cc-4208-b2ba-9e4af457c920", 00:44:24.946 "is_configured": false, 00:44:24.946 "data_offset": 0, 00:44:24.946 "data_size": 65536 00:44:24.946 }, 00:44:24.946 { 00:44:24.946 "name": "BaseBdev3", 00:44:24.946 "uuid": "966ede63-ce64-43b0-8d59-b3ddf3c30614", 00:44:24.946 "is_configured": true, 00:44:24.946 "data_offset": 0, 00:44:24.946 "data_size": 65536 00:44:24.946 }, 00:44:24.946 { 00:44:24.946 "name": "BaseBdev4", 00:44:24.946 "uuid": "96c36a5b-b596-4966-8277-a6178b4a08fb", 00:44:24.946 "is_configured": true, 00:44:24.946 "data_offset": 0, 00:44:24.946 "data_size": 65536 00:44:24.946 } 00:44:24.946 ] 00:44:24.946 }' 00:44:24.946 05:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:24.946 05:35:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:25.514 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:25.514 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:44:25.514 05:35:12 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:44:25.514 05:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:25.514 05:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:25.514 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:44:25.514 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:44:25.514 05:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:25.514 05:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:25.514 [2024-12-09 05:35:12.241703] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:44:25.514 05:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:25.514 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:25.514 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:44:25.514 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:44:25.514 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:25.514 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:25.515 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:44:25.515 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:25.515 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:25.515 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:44:25.515 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:25.515 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:25.515 05:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:25.515 05:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:25.515 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:25.515 05:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:25.515 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:25.515 "name": "Existed_Raid", 00:44:25.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:25.515 "strip_size_kb": 64, 00:44:25.515 "state": "configuring", 00:44:25.515 "raid_level": "raid5f", 00:44:25.515 "superblock": false, 00:44:25.515 "num_base_bdevs": 4, 00:44:25.515 "num_base_bdevs_discovered": 2, 00:44:25.515 "num_base_bdevs_operational": 4, 00:44:25.515 "base_bdevs_list": [ 00:44:25.515 { 00:44:25.515 "name": "BaseBdev1", 00:44:25.515 "uuid": "04b99ec5-a35e-466a-9ca6-5cab580fc766", 00:44:25.515 "is_configured": true, 00:44:25.515 "data_offset": 0, 00:44:25.515 "data_size": 65536 00:44:25.515 }, 00:44:25.515 { 00:44:25.515 "name": null, 00:44:25.515 "uuid": "c167804e-32cc-4208-b2ba-9e4af457c920", 00:44:25.515 "is_configured": false, 00:44:25.515 "data_offset": 0, 00:44:25.515 "data_size": 65536 00:44:25.515 }, 00:44:25.515 { 00:44:25.515 "name": null, 00:44:25.515 "uuid": "966ede63-ce64-43b0-8d59-b3ddf3c30614", 00:44:25.515 "is_configured": false, 00:44:25.515 "data_offset": 0, 00:44:25.515 "data_size": 65536 00:44:25.515 }, 00:44:25.515 { 00:44:25.515 "name": "BaseBdev4", 00:44:25.515 "uuid": "96c36a5b-b596-4966-8277-a6178b4a08fb", 00:44:25.515 
"is_configured": true, 00:44:25.515 "data_offset": 0, 00:44:25.515 "data_size": 65536 00:44:25.515 } 00:44:25.515 ] 00:44:25.515 }' 00:44:25.515 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:25.515 05:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:26.083 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:26.083 05:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:26.083 05:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:26.083 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:44:26.083 05:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:26.083 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:44:26.083 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:44:26.083 05:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:26.083 05:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:26.083 [2024-12-09 05:35:12.845822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:44:26.083 05:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:26.083 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:26.083 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:44:26.083 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:44:26.083 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:26.083 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:26.083 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:44:26.083 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:26.083 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:26.083 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:26.083 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:26.083 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:26.083 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:26.083 05:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:26.083 05:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:26.083 05:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:26.083 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:26.083 "name": "Existed_Raid", 00:44:26.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:26.083 "strip_size_kb": 64, 00:44:26.083 "state": "configuring", 00:44:26.083 "raid_level": "raid5f", 00:44:26.083 "superblock": false, 00:44:26.083 "num_base_bdevs": 4, 00:44:26.083 "num_base_bdevs_discovered": 3, 00:44:26.083 "num_base_bdevs_operational": 4, 00:44:26.083 "base_bdevs_list": [ 00:44:26.083 { 00:44:26.083 "name": "BaseBdev1", 00:44:26.083 "uuid": 
"04b99ec5-a35e-466a-9ca6-5cab580fc766", 00:44:26.083 "is_configured": true, 00:44:26.083 "data_offset": 0, 00:44:26.083 "data_size": 65536 00:44:26.083 }, 00:44:26.083 { 00:44:26.083 "name": null, 00:44:26.083 "uuid": "c167804e-32cc-4208-b2ba-9e4af457c920", 00:44:26.083 "is_configured": false, 00:44:26.083 "data_offset": 0, 00:44:26.083 "data_size": 65536 00:44:26.083 }, 00:44:26.083 { 00:44:26.083 "name": "BaseBdev3", 00:44:26.083 "uuid": "966ede63-ce64-43b0-8d59-b3ddf3c30614", 00:44:26.083 "is_configured": true, 00:44:26.083 "data_offset": 0, 00:44:26.083 "data_size": 65536 00:44:26.083 }, 00:44:26.083 { 00:44:26.083 "name": "BaseBdev4", 00:44:26.083 "uuid": "96c36a5b-b596-4966-8277-a6178b4a08fb", 00:44:26.083 "is_configured": true, 00:44:26.083 "data_offset": 0, 00:44:26.083 "data_size": 65536 00:44:26.083 } 00:44:26.083 ] 00:44:26.083 }' 00:44:26.083 05:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:26.083 05:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:26.652 05:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:26.652 05:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:26.652 05:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:44:26.652 05:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:26.652 05:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:26.652 05:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:44:26.652 05:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:44:26.652 05:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:44:26.652 05:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:26.652 [2024-12-09 05:35:13.426192] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:44:26.652 05:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:26.652 05:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:26.652 05:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:44:26.652 05:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:44:26.652 05:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:26.652 05:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:26.652 05:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:44:26.652 05:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:26.652 05:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:26.652 05:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:26.652 05:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:26.652 05:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:26.652 05:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:26.652 05:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:26.652 05:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:26.652 05:35:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:26.652 05:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:26.652 "name": "Existed_Raid", 00:44:26.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:26.652 "strip_size_kb": 64, 00:44:26.652 "state": "configuring", 00:44:26.652 "raid_level": "raid5f", 00:44:26.652 "superblock": false, 00:44:26.652 "num_base_bdevs": 4, 00:44:26.652 "num_base_bdevs_discovered": 2, 00:44:26.652 "num_base_bdevs_operational": 4, 00:44:26.652 "base_bdevs_list": [ 00:44:26.652 { 00:44:26.652 "name": null, 00:44:26.652 "uuid": "04b99ec5-a35e-466a-9ca6-5cab580fc766", 00:44:26.652 "is_configured": false, 00:44:26.652 "data_offset": 0, 00:44:26.652 "data_size": 65536 00:44:26.652 }, 00:44:26.652 { 00:44:26.652 "name": null, 00:44:26.652 "uuid": "c167804e-32cc-4208-b2ba-9e4af457c920", 00:44:26.652 "is_configured": false, 00:44:26.652 "data_offset": 0, 00:44:26.652 "data_size": 65536 00:44:26.652 }, 00:44:26.652 { 00:44:26.652 "name": "BaseBdev3", 00:44:26.652 "uuid": "966ede63-ce64-43b0-8d59-b3ddf3c30614", 00:44:26.652 "is_configured": true, 00:44:26.652 "data_offset": 0, 00:44:26.652 "data_size": 65536 00:44:26.652 }, 00:44:26.652 { 00:44:26.652 "name": "BaseBdev4", 00:44:26.652 "uuid": "96c36a5b-b596-4966-8277-a6178b4a08fb", 00:44:26.652 "is_configured": true, 00:44:26.652 "data_offset": 0, 00:44:26.652 "data_size": 65536 00:44:26.652 } 00:44:26.652 ] 00:44:26.652 }' 00:44:26.652 05:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:26.652 05:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:27.220 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:44:27.220 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:27.220 05:35:14 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:27.220 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:27.220 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:27.220 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:44:27.220 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:44:27.220 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:27.220 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:27.220 [2024-12-09 05:35:14.087209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:44:27.220 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:27.220 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:27.220 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:44:27.220 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:44:27.220 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:27.220 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:27.220 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:44:27.220 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:27.220 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:27.220 05:35:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:27.220 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:27.220 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:27.220 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:27.220 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:27.220 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:27.220 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:27.220 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:27.220 "name": "Existed_Raid", 00:44:27.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:27.220 "strip_size_kb": 64, 00:44:27.220 "state": "configuring", 00:44:27.220 "raid_level": "raid5f", 00:44:27.220 "superblock": false, 00:44:27.220 "num_base_bdevs": 4, 00:44:27.220 "num_base_bdevs_discovered": 3, 00:44:27.220 "num_base_bdevs_operational": 4, 00:44:27.220 "base_bdevs_list": [ 00:44:27.220 { 00:44:27.220 "name": null, 00:44:27.220 "uuid": "04b99ec5-a35e-466a-9ca6-5cab580fc766", 00:44:27.220 "is_configured": false, 00:44:27.220 "data_offset": 0, 00:44:27.220 "data_size": 65536 00:44:27.220 }, 00:44:27.220 { 00:44:27.220 "name": "BaseBdev2", 00:44:27.220 "uuid": "c167804e-32cc-4208-b2ba-9e4af457c920", 00:44:27.220 "is_configured": true, 00:44:27.220 "data_offset": 0, 00:44:27.220 "data_size": 65536 00:44:27.220 }, 00:44:27.220 { 00:44:27.220 "name": "BaseBdev3", 00:44:27.220 "uuid": "966ede63-ce64-43b0-8d59-b3ddf3c30614", 00:44:27.220 "is_configured": true, 00:44:27.220 "data_offset": 0, 00:44:27.220 "data_size": 65536 00:44:27.220 }, 00:44:27.220 { 00:44:27.220 "name": 
"BaseBdev4", 00:44:27.220 "uuid": "96c36a5b-b596-4966-8277-a6178b4a08fb", 00:44:27.220 "is_configured": true, 00:44:27.220 "data_offset": 0, 00:44:27.220 "data_size": 65536 00:44:27.220 } 00:44:27.220 ] 00:44:27.220 }' 00:44:27.220 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:27.220 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:27.789 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:44:27.789 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:27.789 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:27.789 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:27.789 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:27.789 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:44:27.789 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:27.789 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:27.789 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:27.789 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:44:27.789 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:27.789 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 04b99ec5-a35e-466a-9ca6-5cab580fc766 00:44:27.789 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:27.789 05:35:14 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:27.789 [2024-12-09 05:35:14.760050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:44:27.789 [2024-12-09 05:35:14.760394] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:44:27.789 [2024-12-09 05:35:14.760425] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:44:27.789 [2024-12-09 05:35:14.760912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:44:28.048 NewBaseBdev 00:44:28.048 [2024-12-09 05:35:14.767849] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:44:28.048 [2024-12-09 05:35:14.767884] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:44:28.048 [2024-12-09 05:35:14.768283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:44:28.048 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:28.048 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:44:28.048 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:44:28.048 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:44:28.048 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:44:28.048 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:44:28.048 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:44:28.048 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:44:28.048 05:35:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:44:28.048 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:28.048 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:28.048 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:44:28.048 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:28.048 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:28.048 [ 00:44:28.048 { 00:44:28.048 "name": "NewBaseBdev", 00:44:28.048 "aliases": [ 00:44:28.048 "04b99ec5-a35e-466a-9ca6-5cab580fc766" 00:44:28.048 ], 00:44:28.048 "product_name": "Malloc disk", 00:44:28.049 "block_size": 512, 00:44:28.049 "num_blocks": 65536, 00:44:28.049 "uuid": "04b99ec5-a35e-466a-9ca6-5cab580fc766", 00:44:28.049 "assigned_rate_limits": { 00:44:28.049 "rw_ios_per_sec": 0, 00:44:28.049 "rw_mbytes_per_sec": 0, 00:44:28.049 "r_mbytes_per_sec": 0, 00:44:28.049 "w_mbytes_per_sec": 0 00:44:28.049 }, 00:44:28.049 "claimed": true, 00:44:28.049 "claim_type": "exclusive_write", 00:44:28.049 "zoned": false, 00:44:28.049 "supported_io_types": { 00:44:28.049 "read": true, 00:44:28.049 "write": true, 00:44:28.049 "unmap": true, 00:44:28.049 "flush": true, 00:44:28.049 "reset": true, 00:44:28.049 "nvme_admin": false, 00:44:28.049 "nvme_io": false, 00:44:28.049 "nvme_io_md": false, 00:44:28.049 "write_zeroes": true, 00:44:28.049 "zcopy": true, 00:44:28.049 "get_zone_info": false, 00:44:28.049 "zone_management": false, 00:44:28.049 "zone_append": false, 00:44:28.049 "compare": false, 00:44:28.049 "compare_and_write": false, 00:44:28.049 "abort": true, 00:44:28.049 "seek_hole": false, 00:44:28.049 "seek_data": false, 00:44:28.049 "copy": true, 00:44:28.049 "nvme_iov_md": false 00:44:28.049 }, 00:44:28.049 "memory_domains": [ 00:44:28.049 { 
00:44:28.049 "dma_device_id": "system", 00:44:28.049 "dma_device_type": 1 00:44:28.049 }, 00:44:28.049 { 00:44:28.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:44:28.049 "dma_device_type": 2 00:44:28.049 } 00:44:28.049 ], 00:44:28.049 "driver_specific": {} 00:44:28.049 } 00:44:28.049 ] 00:44:28.049 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:28.049 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:44:28.049 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:44:28.049 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:44:28.049 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:44:28.049 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:28.049 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:28.049 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:44:28.049 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:28.049 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:28.049 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:28.049 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:28.049 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:28.049 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:28.049 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 
-- # set +x 00:44:28.049 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:28.049 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:28.049 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:28.049 "name": "Existed_Raid", 00:44:28.049 "uuid": "7e3a0413-65cd-49f0-96b0-a9db473fdf5b", 00:44:28.049 "strip_size_kb": 64, 00:44:28.049 "state": "online", 00:44:28.049 "raid_level": "raid5f", 00:44:28.049 "superblock": false, 00:44:28.049 "num_base_bdevs": 4, 00:44:28.049 "num_base_bdevs_discovered": 4, 00:44:28.049 "num_base_bdevs_operational": 4, 00:44:28.049 "base_bdevs_list": [ 00:44:28.049 { 00:44:28.049 "name": "NewBaseBdev", 00:44:28.049 "uuid": "04b99ec5-a35e-466a-9ca6-5cab580fc766", 00:44:28.049 "is_configured": true, 00:44:28.049 "data_offset": 0, 00:44:28.049 "data_size": 65536 00:44:28.049 }, 00:44:28.049 { 00:44:28.049 "name": "BaseBdev2", 00:44:28.049 "uuid": "c167804e-32cc-4208-b2ba-9e4af457c920", 00:44:28.049 "is_configured": true, 00:44:28.049 "data_offset": 0, 00:44:28.049 "data_size": 65536 00:44:28.049 }, 00:44:28.049 { 00:44:28.049 "name": "BaseBdev3", 00:44:28.049 "uuid": "966ede63-ce64-43b0-8d59-b3ddf3c30614", 00:44:28.049 "is_configured": true, 00:44:28.049 "data_offset": 0, 00:44:28.049 "data_size": 65536 00:44:28.049 }, 00:44:28.049 { 00:44:28.049 "name": "BaseBdev4", 00:44:28.049 "uuid": "96c36a5b-b596-4966-8277-a6178b4a08fb", 00:44:28.049 "is_configured": true, 00:44:28.049 "data_offset": 0, 00:44:28.049 "data_size": 65536 00:44:28.049 } 00:44:28.049 ] 00:44:28.049 }' 00:44:28.049 05:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:28.049 05:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:28.625 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:44:28.625 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:44:28.625 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:44:28.625 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:44:28.625 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:44:28.625 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:44:28.625 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:44:28.625 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:44:28.625 05:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:28.625 05:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:28.625 [2024-12-09 05:35:15.392592] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:44:28.625 05:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:28.625 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:44:28.625 "name": "Existed_Raid", 00:44:28.625 "aliases": [ 00:44:28.625 "7e3a0413-65cd-49f0-96b0-a9db473fdf5b" 00:44:28.625 ], 00:44:28.625 "product_name": "Raid Volume", 00:44:28.625 "block_size": 512, 00:44:28.625 "num_blocks": 196608, 00:44:28.625 "uuid": "7e3a0413-65cd-49f0-96b0-a9db473fdf5b", 00:44:28.625 "assigned_rate_limits": { 00:44:28.625 "rw_ios_per_sec": 0, 00:44:28.625 "rw_mbytes_per_sec": 0, 00:44:28.625 "r_mbytes_per_sec": 0, 00:44:28.625 "w_mbytes_per_sec": 0 00:44:28.625 }, 00:44:28.625 "claimed": false, 00:44:28.625 "zoned": false, 00:44:28.625 "supported_io_types": { 00:44:28.625 
"read": true, 00:44:28.625 "write": true, 00:44:28.625 "unmap": false, 00:44:28.625 "flush": false, 00:44:28.625 "reset": true, 00:44:28.625 "nvme_admin": false, 00:44:28.625 "nvme_io": false, 00:44:28.625 "nvme_io_md": false, 00:44:28.625 "write_zeroes": true, 00:44:28.625 "zcopy": false, 00:44:28.625 "get_zone_info": false, 00:44:28.625 "zone_management": false, 00:44:28.625 "zone_append": false, 00:44:28.625 "compare": false, 00:44:28.625 "compare_and_write": false, 00:44:28.625 "abort": false, 00:44:28.625 "seek_hole": false, 00:44:28.625 "seek_data": false, 00:44:28.625 "copy": false, 00:44:28.625 "nvme_iov_md": false 00:44:28.625 }, 00:44:28.625 "driver_specific": { 00:44:28.625 "raid": { 00:44:28.625 "uuid": "7e3a0413-65cd-49f0-96b0-a9db473fdf5b", 00:44:28.625 "strip_size_kb": 64, 00:44:28.625 "state": "online", 00:44:28.625 "raid_level": "raid5f", 00:44:28.625 "superblock": false, 00:44:28.625 "num_base_bdevs": 4, 00:44:28.625 "num_base_bdevs_discovered": 4, 00:44:28.625 "num_base_bdevs_operational": 4, 00:44:28.625 "base_bdevs_list": [ 00:44:28.625 { 00:44:28.625 "name": "NewBaseBdev", 00:44:28.625 "uuid": "04b99ec5-a35e-466a-9ca6-5cab580fc766", 00:44:28.625 "is_configured": true, 00:44:28.625 "data_offset": 0, 00:44:28.625 "data_size": 65536 00:44:28.625 }, 00:44:28.625 { 00:44:28.625 "name": "BaseBdev2", 00:44:28.625 "uuid": "c167804e-32cc-4208-b2ba-9e4af457c920", 00:44:28.625 "is_configured": true, 00:44:28.625 "data_offset": 0, 00:44:28.625 "data_size": 65536 00:44:28.625 }, 00:44:28.625 { 00:44:28.625 "name": "BaseBdev3", 00:44:28.625 "uuid": "966ede63-ce64-43b0-8d59-b3ddf3c30614", 00:44:28.625 "is_configured": true, 00:44:28.625 "data_offset": 0, 00:44:28.625 "data_size": 65536 00:44:28.625 }, 00:44:28.625 { 00:44:28.625 "name": "BaseBdev4", 00:44:28.625 "uuid": "96c36a5b-b596-4966-8277-a6178b4a08fb", 00:44:28.625 "is_configured": true, 00:44:28.625 "data_offset": 0, 00:44:28.625 "data_size": 65536 00:44:28.625 } 00:44:28.625 ] 00:44:28.625 } 
00:44:28.625 } 00:44:28.625 }' 00:44:28.625 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:44:28.625 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:44:28.625 BaseBdev2 00:44:28.625 BaseBdev3 00:44:28.625 BaseBdev4' 00:44:28.625 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:44:28.625 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:44:28.625 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:44:28.625 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:44:28.625 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:44:28.625 05:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:28.625 05:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:28.626 05:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:28.884 
05:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:44:28.884 05:35:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:28.884 [2024-12-09 05:35:15.788380] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:44:28.884 [2024-12-09 05:35:15.788554] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:44:28.884 [2024-12-09 05:35:15.788953] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:44:28.884 [2024-12-09 05:35:15.789654] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:44:28.884 [2024-12-09 05:35:15.789682] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83273 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83273 ']' 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83273 
00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83273 00:44:28.884 killing process with pid 83273 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83273' 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83273 00:44:28.884 [2024-12-09 05:35:15.829928] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:44:28.884 05:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83273 00:44:29.450 [2024-12-09 05:35:16.149577] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:44:30.383 05:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:44:30.383 00:44:30.383 real 0m13.086s 00:44:30.383 user 0m21.694s 00:44:30.383 sys 0m1.851s 00:44:30.383 05:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:30.383 ************************************ 00:44:30.383 END TEST raid5f_state_function_test 00:44:30.383 ************************************ 00:44:30.383 05:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:44:30.383 05:35:17 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:44:30.383 05:35:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:44:30.383 
05:35:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:30.383 05:35:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:44:30.383 ************************************ 00:44:30.383 START TEST raid5f_state_function_test_sb 00:44:30.383 ************************************ 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:44:30.384 
05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83957 00:44:30.384 Process raid pid: 83957 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83957' 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83957 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83957 ']' 00:44:30.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:30.384 05:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:30.688 [2024-12-09 05:35:17.430997] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:44:30.688 [2024-12-09 05:35:17.431555] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:30.688 [2024-12-09 05:35:17.617971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:30.950 [2024-12-09 05:35:17.757424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:31.207 [2024-12-09 05:35:17.963955] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:44:31.207 [2024-12-09 05:35:17.964015] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:44:31.517 05:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:31.517 05:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:44:31.517 05:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:44:31.517 05:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:31.517 05:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:31.517 [2024-12-09 05:35:18.430665] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:44:31.517 [2024-12-09 05:35:18.430908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:44:31.517 [2024-12-09 05:35:18.431069] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:44:31.517 [2024-12-09 05:35:18.431138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:44:31.517 [2024-12-09 05:35:18.431378] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:44:31.517 [2024-12-09 05:35:18.431453] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:44:31.517 [2024-12-09 05:35:18.431661] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:44:31.517 [2024-12-09 05:35:18.431734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:44:31.517 05:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:31.517 05:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:31.517 05:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:44:31.517 05:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:44:31.517 05:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:31.517 05:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:31.517 05:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:44:31.517 05:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:31.517 05:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:31.517 05:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:31.517 05:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:31.517 05:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:31.517 05:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:44:31.517 05:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:31.517 05:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:31.517 05:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:31.517 05:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:31.517 "name": "Existed_Raid", 00:44:31.517 "uuid": "8df30e78-8e2c-4d81-8e19-259e18100558", 00:44:31.517 "strip_size_kb": 64, 00:44:31.517 "state": "configuring", 00:44:31.517 "raid_level": "raid5f", 00:44:31.517 "superblock": true, 00:44:31.517 "num_base_bdevs": 4, 00:44:31.517 "num_base_bdevs_discovered": 0, 00:44:31.517 "num_base_bdevs_operational": 4, 00:44:31.517 "base_bdevs_list": [ 00:44:31.517 { 00:44:31.517 "name": "BaseBdev1", 00:44:31.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:31.517 "is_configured": false, 00:44:31.517 "data_offset": 0, 00:44:31.517 "data_size": 0 00:44:31.517 }, 00:44:31.517 { 00:44:31.517 "name": "BaseBdev2", 00:44:31.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:31.517 "is_configured": false, 00:44:31.517 "data_offset": 0, 00:44:31.517 "data_size": 0 00:44:31.517 }, 00:44:31.517 { 00:44:31.517 "name": "BaseBdev3", 00:44:31.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:31.517 "is_configured": false, 00:44:31.517 "data_offset": 0, 00:44:31.517 "data_size": 0 00:44:31.517 }, 00:44:31.517 { 00:44:31.517 "name": "BaseBdev4", 00:44:31.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:31.517 "is_configured": false, 00:44:31.517 "data_offset": 0, 00:44:31.517 "data_size": 0 00:44:31.517 } 00:44:31.517 ] 00:44:31.517 }' 00:44:31.517 05:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:31.517 05:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
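The trace above repeats one pattern for every base bdev: create it with `rpc_cmd bdev_malloc_create`, then block in `waitforbdev`, which polls `bdev_get_bdevs -b <name> -t <timeout>` until the bdev shows up. A minimal stand-alone sketch of that polling loop is below; `rpc_cmd` is stubbed out here (an assumption, so the sketch runs without a live SPDK target — in the real suite it wraps `scripts/rpc.py`), and the retry count and sleep interval are illustrative, not the suite's actual values.

```shell
#!/usr/bin/env bash
# Stub for the suite's rpc_cmd wrapper (assumption: real one calls
# scripts/rpc.py against a running SPDK target). Here we pretend only
# BaseBdev1 exists, so both branches of waitforbdev are exercised.
rpc_cmd() {
	# $1=bdev_get_bdevs $2=-b $3=<bdev name> $4=-t $5=<timeout ms>
	[ "$3" = "BaseBdev1" ]
}

# Poll until the named bdev is visible, mirroring the waitforbdev
# calls in the trace (default timeout 2000 ms when none is given).
waitforbdev() {
	local bdev_name=$1 bdev_timeout=${2:-2000} i
	for ((i = 0; i < 5; i++)); do
		if rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"; then
			return 0
		fi
		sleep 0.1
	done
	return 1
}

waitforbdev BaseBdev1 && echo "BaseBdev1 ready"
waitforbdev BaseBdev9 || echo "BaseBdev9 missing"
```

With the stub, the first call succeeds immediately and the second exhausts its retries, printing `BaseBdev1 ready` then `BaseBdev9 missing`.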
00:44:32.083 05:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:44:32.083 05:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:32.083 05:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:32.083 [2024-12-09 05:35:18.974792] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:44:32.083 [2024-12-09 05:35:18.975002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:44:32.083 05:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:32.083 05:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:44:32.083 05:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:32.083 05:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:32.083 [2024-12-09 05:35:18.982791] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:44:32.083 [2024-12-09 05:35:18.982976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:44:32.083 [2024-12-09 05:35:18.983135] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:44:32.083 [2024-12-09 05:35:18.983270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:44:32.083 [2024-12-09 05:35:18.983384] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:44:32.083 [2024-12-09 05:35:18.983454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:44:32.083 [2024-12-09 05:35:18.983635] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:44:32.083 [2024-12-09 05:35:18.983708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:44:32.083 05:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:32.083 05:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:44:32.083 05:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:32.083 05:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:32.083 [2024-12-09 05:35:19.026745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:44:32.083 BaseBdev1 00:44:32.083 05:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:32.083 05:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:44:32.083 05:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:44:32.083 05:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:44:32.083 05:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:44:32.083 05:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:44:32.083 05:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:44:32.083 05:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:44:32.083 05:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:32.083 05:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:44:32.083 05:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:32.083 05:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:44:32.083 05:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:32.083 05:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:32.083 [ 00:44:32.083 { 00:44:32.083 "name": "BaseBdev1", 00:44:32.083 "aliases": [ 00:44:32.083 "c7c710e1-31ec-4b7c-8b9a-6216498b45fb" 00:44:32.083 ], 00:44:32.083 "product_name": "Malloc disk", 00:44:32.083 "block_size": 512, 00:44:32.083 "num_blocks": 65536, 00:44:32.083 "uuid": "c7c710e1-31ec-4b7c-8b9a-6216498b45fb", 00:44:32.083 "assigned_rate_limits": { 00:44:32.083 "rw_ios_per_sec": 0, 00:44:32.083 "rw_mbytes_per_sec": 0, 00:44:32.083 "r_mbytes_per_sec": 0, 00:44:32.083 "w_mbytes_per_sec": 0 00:44:32.083 }, 00:44:32.083 "claimed": true, 00:44:32.083 "claim_type": "exclusive_write", 00:44:32.083 "zoned": false, 00:44:32.083 "supported_io_types": { 00:44:32.083 "read": true, 00:44:32.083 "write": true, 00:44:32.083 "unmap": true, 00:44:32.083 "flush": true, 00:44:32.083 "reset": true, 00:44:32.083 "nvme_admin": false, 00:44:32.083 "nvme_io": false, 00:44:32.341 "nvme_io_md": false, 00:44:32.341 "write_zeroes": true, 00:44:32.341 "zcopy": true, 00:44:32.341 "get_zone_info": false, 00:44:32.341 "zone_management": false, 00:44:32.341 "zone_append": false, 00:44:32.341 "compare": false, 00:44:32.341 "compare_and_write": false, 00:44:32.341 "abort": true, 00:44:32.341 "seek_hole": false, 00:44:32.341 "seek_data": false, 00:44:32.341 "copy": true, 00:44:32.341 "nvme_iov_md": false 00:44:32.341 }, 00:44:32.341 "memory_domains": [ 00:44:32.341 { 00:44:32.341 "dma_device_id": "system", 00:44:32.341 "dma_device_type": 1 00:44:32.341 }, 00:44:32.341 { 00:44:32.341 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:44:32.341 "dma_device_type": 2 00:44:32.341 } 00:44:32.341 ], 00:44:32.341 "driver_specific": {} 00:44:32.341 } 00:44:32.341 ] 00:44:32.341 05:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:32.341 05:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:44:32.341 05:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:32.341 05:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:44:32.341 05:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:44:32.341 05:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:32.341 05:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:32.341 05:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:44:32.341 05:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:32.341 05:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:32.341 05:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:32.341 05:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:32.341 05:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:32.341 05:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:32.341 05:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:32.341 05:35:19 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:32.341 05:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:32.341 05:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:32.341 "name": "Existed_Raid", 00:44:32.341 "uuid": "d871ed53-0ac2-487e-b698-9844eed9ce20", 00:44:32.341 "strip_size_kb": 64, 00:44:32.341 "state": "configuring", 00:44:32.341 "raid_level": "raid5f", 00:44:32.341 "superblock": true, 00:44:32.341 "num_base_bdevs": 4, 00:44:32.341 "num_base_bdevs_discovered": 1, 00:44:32.341 "num_base_bdevs_operational": 4, 00:44:32.341 "base_bdevs_list": [ 00:44:32.341 { 00:44:32.341 "name": "BaseBdev1", 00:44:32.341 "uuid": "c7c710e1-31ec-4b7c-8b9a-6216498b45fb", 00:44:32.341 "is_configured": true, 00:44:32.341 "data_offset": 2048, 00:44:32.341 "data_size": 63488 00:44:32.341 }, 00:44:32.341 { 00:44:32.341 "name": "BaseBdev2", 00:44:32.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:32.341 "is_configured": false, 00:44:32.341 "data_offset": 0, 00:44:32.341 "data_size": 0 00:44:32.341 }, 00:44:32.341 { 00:44:32.341 "name": "BaseBdev3", 00:44:32.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:32.341 "is_configured": false, 00:44:32.341 "data_offset": 0, 00:44:32.341 "data_size": 0 00:44:32.341 }, 00:44:32.341 { 00:44:32.341 "name": "BaseBdev4", 00:44:32.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:32.341 "is_configured": false, 00:44:32.341 "data_offset": 0, 00:44:32.342 "data_size": 0 00:44:32.342 } 00:44:32.342 ] 00:44:32.342 }' 00:44:32.342 05:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:32.342 05:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:32.600 05:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:44:32.600 05:35:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:32.600 05:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:32.600 [2024-12-09 05:35:19.534976] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:44:32.600 [2024-12-09 05:35:19.535049] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:44:32.600 05:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:32.600 05:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:44:32.600 05:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:32.600 05:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:32.600 [2024-12-09 05:35:19.543050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:44:32.600 [2024-12-09 05:35:19.545707] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:44:32.600 [2024-12-09 05:35:19.545961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:44:32.600 [2024-12-09 05:35:19.546123] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:44:32.600 [2024-12-09 05:35:19.546307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:44:32.600 [2024-12-09 05:35:19.546442] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:44:32.600 [2024-12-09 05:35:19.546492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:44:32.600 05:35:19 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:32.600 05:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:44:32.600 05:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:44:32.600 05:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:32.600 05:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:44:32.600 05:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:44:32.600 05:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:32.600 05:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:32.600 05:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:44:32.600 05:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:32.600 05:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:32.600 05:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:32.600 05:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:32.600 05:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:32.600 05:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:32.600 05:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:32.600 05:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:32.600 05:35:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:32.859 05:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:32.859 "name": "Existed_Raid", 00:44:32.859 "uuid": "52c0f041-54d9-4ae1-8607-32b925e84ba7", 00:44:32.859 "strip_size_kb": 64, 00:44:32.859 "state": "configuring", 00:44:32.859 "raid_level": "raid5f", 00:44:32.859 "superblock": true, 00:44:32.859 "num_base_bdevs": 4, 00:44:32.859 "num_base_bdevs_discovered": 1, 00:44:32.859 "num_base_bdevs_operational": 4, 00:44:32.859 "base_bdevs_list": [ 00:44:32.859 { 00:44:32.859 "name": "BaseBdev1", 00:44:32.859 "uuid": "c7c710e1-31ec-4b7c-8b9a-6216498b45fb", 00:44:32.859 "is_configured": true, 00:44:32.859 "data_offset": 2048, 00:44:32.859 "data_size": 63488 00:44:32.859 }, 00:44:32.859 { 00:44:32.860 "name": "BaseBdev2", 00:44:32.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:32.860 "is_configured": false, 00:44:32.860 "data_offset": 0, 00:44:32.860 "data_size": 0 00:44:32.860 }, 00:44:32.860 { 00:44:32.860 "name": "BaseBdev3", 00:44:32.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:32.860 "is_configured": false, 00:44:32.860 "data_offset": 0, 00:44:32.860 "data_size": 0 00:44:32.860 }, 00:44:32.860 { 00:44:32.860 "name": "BaseBdev4", 00:44:32.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:32.860 "is_configured": false, 00:44:32.860 "data_offset": 0, 00:44:32.860 "data_size": 0 00:44:32.860 } 00:44:32.860 ] 00:44:32.860 }' 00:44:32.860 05:35:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:32.860 05:35:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:33.118 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:44:33.118 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:44:33.118 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:33.377 BaseBdev2 00:44:33.377 [2024-12-09 05:35:20.110103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:44:33.377 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:33.377 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:44:33.377 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:44:33.377 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:44:33.377 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:44:33.377 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:44:33.377 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:44:33.377 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:44:33.377 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:33.377 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:33.377 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:33.377 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:44:33.377 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:33.377 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:33.377 [ 00:44:33.377 { 00:44:33.377 "name": "BaseBdev2", 00:44:33.377 "aliases": [ 00:44:33.377 
"79c4d1a0-8b17-4159-8c66-1e9ae2531ba5" 00:44:33.377 ], 00:44:33.377 "product_name": "Malloc disk", 00:44:33.377 "block_size": 512, 00:44:33.377 "num_blocks": 65536, 00:44:33.377 "uuid": "79c4d1a0-8b17-4159-8c66-1e9ae2531ba5", 00:44:33.377 "assigned_rate_limits": { 00:44:33.377 "rw_ios_per_sec": 0, 00:44:33.377 "rw_mbytes_per_sec": 0, 00:44:33.377 "r_mbytes_per_sec": 0, 00:44:33.377 "w_mbytes_per_sec": 0 00:44:33.377 }, 00:44:33.377 "claimed": true, 00:44:33.377 "claim_type": "exclusive_write", 00:44:33.377 "zoned": false, 00:44:33.377 "supported_io_types": { 00:44:33.377 "read": true, 00:44:33.377 "write": true, 00:44:33.377 "unmap": true, 00:44:33.377 "flush": true, 00:44:33.377 "reset": true, 00:44:33.377 "nvme_admin": false, 00:44:33.377 "nvme_io": false, 00:44:33.377 "nvme_io_md": false, 00:44:33.377 "write_zeroes": true, 00:44:33.377 "zcopy": true, 00:44:33.377 "get_zone_info": false, 00:44:33.377 "zone_management": false, 00:44:33.377 "zone_append": false, 00:44:33.377 "compare": false, 00:44:33.377 "compare_and_write": false, 00:44:33.377 "abort": true, 00:44:33.377 "seek_hole": false, 00:44:33.377 "seek_data": false, 00:44:33.378 "copy": true, 00:44:33.378 "nvme_iov_md": false 00:44:33.378 }, 00:44:33.378 "memory_domains": [ 00:44:33.378 { 00:44:33.378 "dma_device_id": "system", 00:44:33.378 "dma_device_type": 1 00:44:33.378 }, 00:44:33.378 { 00:44:33.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:44:33.378 "dma_device_type": 2 00:44:33.378 } 00:44:33.378 ], 00:44:33.378 "driver_specific": {} 00:44:33.378 } 00:44:33.378 ] 00:44:33.378 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:33.378 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:44:33.378 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:44:33.378 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:44:33.378 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:33.378 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:44:33.378 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:44:33.378 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:33.378 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:33.378 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:44:33.378 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:33.378 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:33.378 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:33.378 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:33.378 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:33.378 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:33.378 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:33.378 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:33.378 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:33.378 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:33.378 "name": "Existed_Raid", 00:44:33.378 "uuid": 
"52c0f041-54d9-4ae1-8607-32b925e84ba7", 00:44:33.378 "strip_size_kb": 64, 00:44:33.378 "state": "configuring", 00:44:33.378 "raid_level": "raid5f", 00:44:33.378 "superblock": true, 00:44:33.378 "num_base_bdevs": 4, 00:44:33.378 "num_base_bdevs_discovered": 2, 00:44:33.378 "num_base_bdevs_operational": 4, 00:44:33.378 "base_bdevs_list": [ 00:44:33.378 { 00:44:33.378 "name": "BaseBdev1", 00:44:33.378 "uuid": "c7c710e1-31ec-4b7c-8b9a-6216498b45fb", 00:44:33.378 "is_configured": true, 00:44:33.378 "data_offset": 2048, 00:44:33.378 "data_size": 63488 00:44:33.378 }, 00:44:33.378 { 00:44:33.378 "name": "BaseBdev2", 00:44:33.378 "uuid": "79c4d1a0-8b17-4159-8c66-1e9ae2531ba5", 00:44:33.378 "is_configured": true, 00:44:33.378 "data_offset": 2048, 00:44:33.378 "data_size": 63488 00:44:33.378 }, 00:44:33.378 { 00:44:33.378 "name": "BaseBdev3", 00:44:33.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:33.378 "is_configured": false, 00:44:33.378 "data_offset": 0, 00:44:33.378 "data_size": 0 00:44:33.378 }, 00:44:33.378 { 00:44:33.378 "name": "BaseBdev4", 00:44:33.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:33.378 "is_configured": false, 00:44:33.378 "data_offset": 0, 00:44:33.378 "data_size": 0 00:44:33.378 } 00:44:33.378 ] 00:44:33.378 }' 00:44:33.378 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:33.378 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:33.947 BaseBdev3 00:44:33.947 [2024-12-09 05:35:20.705928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 
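Each `verify_raid_bdev_state Existed_Raid configuring raid5f 64 4` step above captures the array's JSON via `bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'` and compares fields against the expected values. A self-contained sketch of those checks follows; the JSON literal is abbreviated from the trace (field names match the real `bdev_raid_get_bdevs` output), and embedding it inline rather than querying a live target is an assumption made so the sketch runs on its own.

```shell
#!/usr/bin/env bash
# Abbreviated raid_bdev_info, same shape as the dumps in the trace:
# a 4-member raid5f array still "configuring" with one member found.
raid_bdev_info='{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid5f",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 4
}'

# Pull out the fields verify_raid_bdev_state compares.
state=$(jq -r '.state' <<< "$raid_bdev_info")
raid_level=$(jq -r '.raid_level' <<< "$raid_bdev_info")
strip_size=$(jq -r '.strip_size_kb' <<< "$raid_bdev_info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$raid_bdev_info")

# Match against the expected configuring/raid5f/64 arguments.
[[ $state == configuring && $raid_level == raid5f && $strip_size == 64 ]] \
	&& echo "Existed_Raid state verified ($discovered of 4 discovered)"
```

The array only leaves `configuring` once `num_base_bdevs_discovered` reaches `num_base_bdevs_operational`, which is why the trace re-runs this check after every `bdev_malloc_create`.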
00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:33.947 [ 00:44:33.947 { 00:44:33.947 "name": "BaseBdev3", 00:44:33.947 "aliases": [ 00:44:33.947 "768f4869-47d3-421f-8601-2fde217745ed" 00:44:33.947 ], 00:44:33.947 "product_name": "Malloc disk", 00:44:33.947 "block_size": 512, 00:44:33.947 "num_blocks": 65536, 00:44:33.947 "uuid": "768f4869-47d3-421f-8601-2fde217745ed", 00:44:33.947 
"assigned_rate_limits": { 00:44:33.947 "rw_ios_per_sec": 0, 00:44:33.947 "rw_mbytes_per_sec": 0, 00:44:33.947 "r_mbytes_per_sec": 0, 00:44:33.947 "w_mbytes_per_sec": 0 00:44:33.947 }, 00:44:33.947 "claimed": true, 00:44:33.947 "claim_type": "exclusive_write", 00:44:33.947 "zoned": false, 00:44:33.947 "supported_io_types": { 00:44:33.947 "read": true, 00:44:33.947 "write": true, 00:44:33.947 "unmap": true, 00:44:33.947 "flush": true, 00:44:33.947 "reset": true, 00:44:33.947 "nvme_admin": false, 00:44:33.947 "nvme_io": false, 00:44:33.947 "nvme_io_md": false, 00:44:33.947 "write_zeroes": true, 00:44:33.947 "zcopy": true, 00:44:33.947 "get_zone_info": false, 00:44:33.947 "zone_management": false, 00:44:33.947 "zone_append": false, 00:44:33.947 "compare": false, 00:44:33.947 "compare_and_write": false, 00:44:33.947 "abort": true, 00:44:33.947 "seek_hole": false, 00:44:33.947 "seek_data": false, 00:44:33.947 "copy": true, 00:44:33.947 "nvme_iov_md": false 00:44:33.947 }, 00:44:33.947 "memory_domains": [ 00:44:33.947 { 00:44:33.947 "dma_device_id": "system", 00:44:33.947 "dma_device_type": 1 00:44:33.947 }, 00:44:33.947 { 00:44:33.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:44:33.947 "dma_device_type": 2 00:44:33.947 } 00:44:33.947 ], 00:44:33.947 "driver_specific": {} 00:44:33.947 } 00:44:33.947 ] 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:33.947 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:33.947 "name": "Existed_Raid", 00:44:33.947 "uuid": "52c0f041-54d9-4ae1-8607-32b925e84ba7", 00:44:33.947 "strip_size_kb": 64, 00:44:33.947 "state": "configuring", 00:44:33.947 "raid_level": "raid5f", 00:44:33.947 "superblock": true, 00:44:33.947 "num_base_bdevs": 4, 00:44:33.947 "num_base_bdevs_discovered": 3, 
00:44:33.947 "num_base_bdevs_operational": 4, 00:44:33.948 "base_bdevs_list": [ 00:44:33.948 { 00:44:33.948 "name": "BaseBdev1", 00:44:33.948 "uuid": "c7c710e1-31ec-4b7c-8b9a-6216498b45fb", 00:44:33.948 "is_configured": true, 00:44:33.948 "data_offset": 2048, 00:44:33.948 "data_size": 63488 00:44:33.948 }, 00:44:33.948 { 00:44:33.948 "name": "BaseBdev2", 00:44:33.948 "uuid": "79c4d1a0-8b17-4159-8c66-1e9ae2531ba5", 00:44:33.948 "is_configured": true, 00:44:33.948 "data_offset": 2048, 00:44:33.948 "data_size": 63488 00:44:33.948 }, 00:44:33.948 { 00:44:33.948 "name": "BaseBdev3", 00:44:33.948 "uuid": "768f4869-47d3-421f-8601-2fde217745ed", 00:44:33.948 "is_configured": true, 00:44:33.948 "data_offset": 2048, 00:44:33.948 "data_size": 63488 00:44:33.948 }, 00:44:33.948 { 00:44:33.948 "name": "BaseBdev4", 00:44:33.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:33.948 "is_configured": false, 00:44:33.948 "data_offset": 0, 00:44:33.948 "data_size": 0 00:44:33.948 } 00:44:33.948 ] 00:44:33.948 }' 00:44:33.948 05:35:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:33.948 05:35:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:34.517 05:35:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:44:34.517 05:35:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:34.517 05:35:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:34.517 [2024-12-09 05:35:21.293555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:44:34.517 BaseBdev4 00:44:34.517 [2024-12-09 05:35:21.294308] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:44:34.517 [2024-12-09 05:35:21.294336] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 
00:44:34.517 [2024-12-09 05:35:21.294732] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:44:34.517 05:35:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:34.517 05:35:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:44:34.517 05:35:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:44:34.517 05:35:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:44:34.517 05:35:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:44:34.517 05:35:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:44:34.517 05:35:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:44:34.517 05:35:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:44:34.517 05:35:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:34.517 05:35:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:34.517 [2024-12-09 05:35:21.301624] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:44:34.517 [2024-12-09 05:35:21.301838] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:44:34.517 [2024-12-09 05:35:21.302328] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:44:34.517 05:35:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:34.518 05:35:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:44:34.518 05:35:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:34.518 05:35:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:34.518 [ 00:44:34.518 { 00:44:34.518 "name": "BaseBdev4", 00:44:34.518 "aliases": [ 00:44:34.518 "92c774be-5a53-4d1b-b40c-5df93905d7db" 00:44:34.518 ], 00:44:34.518 "product_name": "Malloc disk", 00:44:34.518 "block_size": 512, 00:44:34.518 "num_blocks": 65536, 00:44:34.518 "uuid": "92c774be-5a53-4d1b-b40c-5df93905d7db", 00:44:34.518 "assigned_rate_limits": { 00:44:34.518 "rw_ios_per_sec": 0, 00:44:34.518 "rw_mbytes_per_sec": 0, 00:44:34.518 "r_mbytes_per_sec": 0, 00:44:34.518 "w_mbytes_per_sec": 0 00:44:34.518 }, 00:44:34.518 "claimed": true, 00:44:34.518 "claim_type": "exclusive_write", 00:44:34.518 "zoned": false, 00:44:34.518 "supported_io_types": { 00:44:34.518 "read": true, 00:44:34.518 "write": true, 00:44:34.518 "unmap": true, 00:44:34.518 "flush": true, 00:44:34.518 "reset": true, 00:44:34.518 "nvme_admin": false, 00:44:34.518 "nvme_io": false, 00:44:34.518 "nvme_io_md": false, 00:44:34.518 "write_zeroes": true, 00:44:34.518 "zcopy": true, 00:44:34.518 "get_zone_info": false, 00:44:34.518 "zone_management": false, 00:44:34.518 "zone_append": false, 00:44:34.518 "compare": false, 00:44:34.518 "compare_and_write": false, 00:44:34.518 "abort": true, 00:44:34.518 "seek_hole": false, 00:44:34.518 "seek_data": false, 00:44:34.518 "copy": true, 00:44:34.518 "nvme_iov_md": false 00:44:34.518 }, 00:44:34.518 "memory_domains": [ 00:44:34.518 { 00:44:34.518 "dma_device_id": "system", 00:44:34.518 "dma_device_type": 1 00:44:34.518 }, 00:44:34.518 { 00:44:34.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:44:34.518 "dma_device_type": 2 00:44:34.518 } 00:44:34.518 ], 00:44:34.518 "driver_specific": {} 00:44:34.518 } 00:44:34.518 ] 00:44:34.518 05:35:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:34.518 05:35:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:44:34.518 05:35:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:44:34.518 05:35:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:44:34.518 05:35:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:44:34.518 05:35:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:44:34.518 05:35:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:44:34.518 05:35:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:34.518 05:35:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:34.518 05:35:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:44:34.518 05:35:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:34.518 05:35:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:34.518 05:35:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:34.518 05:35:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:34.518 05:35:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:34.518 05:35:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:34.518 05:35:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:34.518 05:35:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:44:34.518 05:35:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:34.518 05:35:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:34.518 "name": "Existed_Raid", 00:44:34.518 "uuid": "52c0f041-54d9-4ae1-8607-32b925e84ba7", 00:44:34.518 "strip_size_kb": 64, 00:44:34.518 "state": "online", 00:44:34.518 "raid_level": "raid5f", 00:44:34.518 "superblock": true, 00:44:34.518 "num_base_bdevs": 4, 00:44:34.518 "num_base_bdevs_discovered": 4, 00:44:34.518 "num_base_bdevs_operational": 4, 00:44:34.518 "base_bdevs_list": [ 00:44:34.518 { 00:44:34.518 "name": "BaseBdev1", 00:44:34.518 "uuid": "c7c710e1-31ec-4b7c-8b9a-6216498b45fb", 00:44:34.518 "is_configured": true, 00:44:34.518 "data_offset": 2048, 00:44:34.518 "data_size": 63488 00:44:34.518 }, 00:44:34.518 { 00:44:34.518 "name": "BaseBdev2", 00:44:34.518 "uuid": "79c4d1a0-8b17-4159-8c66-1e9ae2531ba5", 00:44:34.518 "is_configured": true, 00:44:34.518 "data_offset": 2048, 00:44:34.518 "data_size": 63488 00:44:34.518 }, 00:44:34.518 { 00:44:34.518 "name": "BaseBdev3", 00:44:34.518 "uuid": "768f4869-47d3-421f-8601-2fde217745ed", 00:44:34.518 "is_configured": true, 00:44:34.518 "data_offset": 2048, 00:44:34.518 "data_size": 63488 00:44:34.518 }, 00:44:34.518 { 00:44:34.518 "name": "BaseBdev4", 00:44:34.518 "uuid": "92c774be-5a53-4d1b-b40c-5df93905d7db", 00:44:34.518 "is_configured": true, 00:44:34.518 "data_offset": 2048, 00:44:34.518 "data_size": 63488 00:44:34.518 } 00:44:34.518 ] 00:44:34.518 }' 00:44:34.518 05:35:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:34.518 05:35:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:35.086 05:35:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:44:35.086 05:35:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:44:35.086 05:35:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:44:35.086 05:35:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:44:35.086 05:35:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:44:35.086 05:35:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:44:35.086 05:35:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:44:35.086 05:35:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:35.086 05:35:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:44:35.086 05:35:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:35.086 [2024-12-09 05:35:21.870292] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:44:35.086 05:35:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:35.086 05:35:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:44:35.086 "name": "Existed_Raid", 00:44:35.086 "aliases": [ 00:44:35.086 "52c0f041-54d9-4ae1-8607-32b925e84ba7" 00:44:35.086 ], 00:44:35.086 "product_name": "Raid Volume", 00:44:35.086 "block_size": 512, 00:44:35.086 "num_blocks": 190464, 00:44:35.086 "uuid": "52c0f041-54d9-4ae1-8607-32b925e84ba7", 00:44:35.086 "assigned_rate_limits": { 00:44:35.086 "rw_ios_per_sec": 0, 00:44:35.086 "rw_mbytes_per_sec": 0, 00:44:35.086 "r_mbytes_per_sec": 0, 00:44:35.086 "w_mbytes_per_sec": 0 00:44:35.086 }, 00:44:35.086 "claimed": false, 00:44:35.086 "zoned": false, 00:44:35.086 "supported_io_types": { 00:44:35.086 "read": true, 00:44:35.086 "write": true, 00:44:35.086 "unmap": false, 00:44:35.086 "flush": false, 
00:44:35.086 "reset": true, 00:44:35.086 "nvme_admin": false, 00:44:35.086 "nvme_io": false, 00:44:35.086 "nvme_io_md": false, 00:44:35.086 "write_zeroes": true, 00:44:35.086 "zcopy": false, 00:44:35.086 "get_zone_info": false, 00:44:35.086 "zone_management": false, 00:44:35.086 "zone_append": false, 00:44:35.086 "compare": false, 00:44:35.086 "compare_and_write": false, 00:44:35.086 "abort": false, 00:44:35.086 "seek_hole": false, 00:44:35.086 "seek_data": false, 00:44:35.086 "copy": false, 00:44:35.086 "nvme_iov_md": false 00:44:35.086 }, 00:44:35.086 "driver_specific": { 00:44:35.086 "raid": { 00:44:35.086 "uuid": "52c0f041-54d9-4ae1-8607-32b925e84ba7", 00:44:35.086 "strip_size_kb": 64, 00:44:35.086 "state": "online", 00:44:35.086 "raid_level": "raid5f", 00:44:35.086 "superblock": true, 00:44:35.086 "num_base_bdevs": 4, 00:44:35.086 "num_base_bdevs_discovered": 4, 00:44:35.086 "num_base_bdevs_operational": 4, 00:44:35.086 "base_bdevs_list": [ 00:44:35.086 { 00:44:35.086 "name": "BaseBdev1", 00:44:35.086 "uuid": "c7c710e1-31ec-4b7c-8b9a-6216498b45fb", 00:44:35.086 "is_configured": true, 00:44:35.086 "data_offset": 2048, 00:44:35.086 "data_size": 63488 00:44:35.086 }, 00:44:35.086 { 00:44:35.086 "name": "BaseBdev2", 00:44:35.086 "uuid": "79c4d1a0-8b17-4159-8c66-1e9ae2531ba5", 00:44:35.086 "is_configured": true, 00:44:35.086 "data_offset": 2048, 00:44:35.086 "data_size": 63488 00:44:35.086 }, 00:44:35.086 { 00:44:35.086 "name": "BaseBdev3", 00:44:35.086 "uuid": "768f4869-47d3-421f-8601-2fde217745ed", 00:44:35.086 "is_configured": true, 00:44:35.086 "data_offset": 2048, 00:44:35.086 "data_size": 63488 00:44:35.086 }, 00:44:35.086 { 00:44:35.086 "name": "BaseBdev4", 00:44:35.086 "uuid": "92c774be-5a53-4d1b-b40c-5df93905d7db", 00:44:35.086 "is_configured": true, 00:44:35.086 "data_offset": 2048, 00:44:35.086 "data_size": 63488 00:44:35.086 } 00:44:35.086 ] 00:44:35.086 } 00:44:35.086 } 00:44:35.086 }' 00:44:35.086 05:35:21 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:44:35.086 05:35:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:44:35.086 BaseBdev2 00:44:35.086 BaseBdev3 00:44:35.086 BaseBdev4' 00:44:35.086 05:35:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:44:35.086 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:44:35.086 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:44:35.086 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:44:35.086 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:44:35.086 05:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:35.087 05:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:35.087 05:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:35.347 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:44:35.347 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:44:35.347 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:44:35.347 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:44:35.347 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:44:35.347 05:35:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:35.347 05:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:35.347 05:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:35.347 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:44:35.347 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:44:35.347 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:44:35.347 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:44:35.347 05:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:35.347 05:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:35.347 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:44:35.347 05:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:35.347 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:44:35.347 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:44:35.347 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:44:35.347 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:44:35.347 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:44:35.347 05:35:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:35.347 05:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:35.347 05:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:35.347 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:44:35.347 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:44:35.347 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:44:35.347 05:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:35.347 05:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:35.347 [2024-12-09 05:35:22.242229] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:44:35.607 05:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:35.607 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:44:35.607 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:44:35.607 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:44:35.607 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:44:35.607 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:44:35.607 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:44:35.607 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:44:35.607 05:35:22 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:44:35.607 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:35.607 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:35.607 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:44:35.607 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:35.607 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:35.607 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:35.607 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:35.607 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:35.607 05:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:35.607 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:35.607 05:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:35.607 05:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:35.607 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:35.607 "name": "Existed_Raid", 00:44:35.607 "uuid": "52c0f041-54d9-4ae1-8607-32b925e84ba7", 00:44:35.607 "strip_size_kb": 64, 00:44:35.607 "state": "online", 00:44:35.607 "raid_level": "raid5f", 00:44:35.607 "superblock": true, 00:44:35.607 "num_base_bdevs": 4, 00:44:35.607 "num_base_bdevs_discovered": 3, 00:44:35.607 "num_base_bdevs_operational": 3, 00:44:35.607 "base_bdevs_list": [ 00:44:35.607 { 00:44:35.607 "name": 
null, 00:44:35.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:35.607 "is_configured": false, 00:44:35.607 "data_offset": 0, 00:44:35.607 "data_size": 63488 00:44:35.607 }, 00:44:35.607 { 00:44:35.607 "name": "BaseBdev2", 00:44:35.607 "uuid": "79c4d1a0-8b17-4159-8c66-1e9ae2531ba5", 00:44:35.607 "is_configured": true, 00:44:35.607 "data_offset": 2048, 00:44:35.607 "data_size": 63488 00:44:35.607 }, 00:44:35.607 { 00:44:35.607 "name": "BaseBdev3", 00:44:35.607 "uuid": "768f4869-47d3-421f-8601-2fde217745ed", 00:44:35.607 "is_configured": true, 00:44:35.607 "data_offset": 2048, 00:44:35.607 "data_size": 63488 00:44:35.607 }, 00:44:35.607 { 00:44:35.607 "name": "BaseBdev4", 00:44:35.607 "uuid": "92c774be-5a53-4d1b-b40c-5df93905d7db", 00:44:35.607 "is_configured": true, 00:44:35.607 "data_offset": 2048, 00:44:35.607 "data_size": 63488 00:44:35.607 } 00:44:35.607 ] 00:44:35.607 }' 00:44:35.607 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:35.607 05:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:35.866 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:44:35.866 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:44:35.866 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:44:35.866 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:35.866 05:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:35.866 05:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:35.866 05:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:36.126 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:44:36.126 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:44:36.126 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:44:36.126 05:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:36.126 05:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:36.126 [2024-12-09 05:35:22.873658] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:44:36.126 [2024-12-09 05:35:22.873924] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:44:36.126 [2024-12-09 05:35:22.951360] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:44:36.126 05:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:36.126 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:44:36.126 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:44:36.126 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:36.126 05:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:44:36.126 05:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:36.126 05:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:36.126 05:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:36.126 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:44:36.126 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:44:36.126 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:44:36.126 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:36.126 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:36.126 [2024-12-09 05:35:23.011391] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:44:36.126 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:36.126 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:44:36.126 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:44:36.385 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:36.385 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:44:36.385 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:36.385 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:36.385 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:36.386 [2024-12-09 
05:35:23.152675] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:44:36.386 [2024-12-09 05:35:23.152925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:36.386 05:35:23 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:36.386 BaseBdev2 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:36.386 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:36.386 [ 00:44:36.386 { 00:44:36.386 "name": "BaseBdev2", 00:44:36.646 "aliases": [ 00:44:36.646 "b41f6f92-e968-47fa-8cc1-b9893e74ba83" 00:44:36.646 ], 00:44:36.646 "product_name": "Malloc disk", 00:44:36.646 "block_size": 512, 00:44:36.646 
"num_blocks": 65536, 00:44:36.646 "uuid": "b41f6f92-e968-47fa-8cc1-b9893e74ba83", 00:44:36.646 "assigned_rate_limits": { 00:44:36.646 "rw_ios_per_sec": 0, 00:44:36.646 "rw_mbytes_per_sec": 0, 00:44:36.646 "r_mbytes_per_sec": 0, 00:44:36.646 "w_mbytes_per_sec": 0 00:44:36.646 }, 00:44:36.646 "claimed": false, 00:44:36.646 "zoned": false, 00:44:36.646 "supported_io_types": { 00:44:36.646 "read": true, 00:44:36.646 "write": true, 00:44:36.646 "unmap": true, 00:44:36.646 "flush": true, 00:44:36.646 "reset": true, 00:44:36.646 "nvme_admin": false, 00:44:36.646 "nvme_io": false, 00:44:36.646 "nvme_io_md": false, 00:44:36.646 "write_zeroes": true, 00:44:36.646 "zcopy": true, 00:44:36.646 "get_zone_info": false, 00:44:36.646 "zone_management": false, 00:44:36.646 "zone_append": false, 00:44:36.646 "compare": false, 00:44:36.646 "compare_and_write": false, 00:44:36.646 "abort": true, 00:44:36.646 "seek_hole": false, 00:44:36.646 "seek_data": false, 00:44:36.646 "copy": true, 00:44:36.646 "nvme_iov_md": false 00:44:36.646 }, 00:44:36.646 "memory_domains": [ 00:44:36.646 { 00:44:36.646 "dma_device_id": "system", 00:44:36.646 "dma_device_type": 1 00:44:36.646 }, 00:44:36.646 { 00:44:36.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:44:36.646 "dma_device_type": 2 00:44:36.646 } 00:44:36.646 ], 00:44:36.646 "driver_specific": {} 00:44:36.646 } 00:44:36.646 ] 00:44:36.646 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:36.646 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:44:36.646 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:44:36.646 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:44:36.646 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:44:36.646 05:35:23 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:36.646 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:36.646 BaseBdev3 00:44:36.646 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:36.646 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:44:36.646 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:44:36.646 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:44:36.646 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:44:36.646 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:44:36.646 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:44:36.646 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:44:36.646 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:36.646 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:36.646 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:36.646 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:44:36.646 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:36.646 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:36.646 [ 00:44:36.646 { 00:44:36.646 "name": "BaseBdev3", 00:44:36.646 "aliases": [ 00:44:36.646 
"16fc5c2b-5995-4225-bd58-ad8e5b43e7e7" 00:44:36.646 ], 00:44:36.646 "product_name": "Malloc disk", 00:44:36.646 "block_size": 512, 00:44:36.646 "num_blocks": 65536, 00:44:36.646 "uuid": "16fc5c2b-5995-4225-bd58-ad8e5b43e7e7", 00:44:36.646 "assigned_rate_limits": { 00:44:36.646 "rw_ios_per_sec": 0, 00:44:36.646 "rw_mbytes_per_sec": 0, 00:44:36.646 "r_mbytes_per_sec": 0, 00:44:36.646 "w_mbytes_per_sec": 0 00:44:36.646 }, 00:44:36.646 "claimed": false, 00:44:36.646 "zoned": false, 00:44:36.646 "supported_io_types": { 00:44:36.646 "read": true, 00:44:36.646 "write": true, 00:44:36.646 "unmap": true, 00:44:36.646 "flush": true, 00:44:36.646 "reset": true, 00:44:36.646 "nvme_admin": false, 00:44:36.646 "nvme_io": false, 00:44:36.646 "nvme_io_md": false, 00:44:36.646 "write_zeroes": true, 00:44:36.646 "zcopy": true, 00:44:36.646 "get_zone_info": false, 00:44:36.646 "zone_management": false, 00:44:36.646 "zone_append": false, 00:44:36.646 "compare": false, 00:44:36.646 "compare_and_write": false, 00:44:36.646 "abort": true, 00:44:36.646 "seek_hole": false, 00:44:36.646 "seek_data": false, 00:44:36.646 "copy": true, 00:44:36.646 "nvme_iov_md": false 00:44:36.646 }, 00:44:36.646 "memory_domains": [ 00:44:36.646 { 00:44:36.646 "dma_device_id": "system", 00:44:36.647 "dma_device_type": 1 00:44:36.647 }, 00:44:36.647 { 00:44:36.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:44:36.647 "dma_device_type": 2 00:44:36.647 } 00:44:36.647 ], 00:44:36.647 "driver_specific": {} 00:44:36.647 } 00:44:36.647 ] 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:44:36.647 05:35:23 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:36.647 BaseBdev4 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:44:36.647 [ 00:44:36.647 { 00:44:36.647 "name": "BaseBdev4", 00:44:36.647 "aliases": [ 00:44:36.647 "db66645b-2d00-490d-b0f0-98ba65bc4b3e" 00:44:36.647 ], 00:44:36.647 "product_name": "Malloc disk", 00:44:36.647 "block_size": 512, 00:44:36.647 "num_blocks": 65536, 00:44:36.647 "uuid": "db66645b-2d00-490d-b0f0-98ba65bc4b3e", 00:44:36.647 "assigned_rate_limits": { 00:44:36.647 "rw_ios_per_sec": 0, 00:44:36.647 "rw_mbytes_per_sec": 0, 00:44:36.647 "r_mbytes_per_sec": 0, 00:44:36.647 "w_mbytes_per_sec": 0 00:44:36.647 }, 00:44:36.647 "claimed": false, 00:44:36.647 "zoned": false, 00:44:36.647 "supported_io_types": { 00:44:36.647 "read": true, 00:44:36.647 "write": true, 00:44:36.647 "unmap": true, 00:44:36.647 "flush": true, 00:44:36.647 "reset": true, 00:44:36.647 "nvme_admin": false, 00:44:36.647 "nvme_io": false, 00:44:36.647 "nvme_io_md": false, 00:44:36.647 "write_zeroes": true, 00:44:36.647 "zcopy": true, 00:44:36.647 "get_zone_info": false, 00:44:36.647 "zone_management": false, 00:44:36.647 "zone_append": false, 00:44:36.647 "compare": false, 00:44:36.647 "compare_and_write": false, 00:44:36.647 "abort": true, 00:44:36.647 "seek_hole": false, 00:44:36.647 "seek_data": false, 00:44:36.647 "copy": true, 00:44:36.647 "nvme_iov_md": false 00:44:36.647 }, 00:44:36.647 "memory_domains": [ 00:44:36.647 { 00:44:36.647 "dma_device_id": "system", 00:44:36.647 "dma_device_type": 1 00:44:36.647 }, 00:44:36.647 { 00:44:36.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:44:36.647 "dma_device_type": 2 00:44:36.647 } 00:44:36.647 ], 00:44:36.647 "driver_specific": {} 00:44:36.647 } 00:44:36.647 ] 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:44:36.647 05:35:23 
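The create-and-wait pattern repeated above for BaseBdev2/3/4 (`rpc_cmd bdev_malloc_create 32 512 -b NAME` followed by `waitforbdev NAME`) can be sketched as a standalone bash snippet. This is a simplified reconstruction from the trace, not the exact `autotest_common.sh` helper: `rpc_cmd` is stubbed so the snippet runs without a live SPDK target, and the retry loop bound is an assumption.

```shell
#!/usr/bin/env bash
# Sketch of the waitforbdev pattern visible in the trace above.
# rpc_cmd is stubbed here; in the real suite it wraps scripts/rpc.py
# against a running SPDK application.
rpc_cmd() {
    # Stub: pretend every queried bdev already exists.
    if [[ $1 == bdev_get_bdevs ]]; then
        echo '[{"name": "'"$3"'"}]'
    fi
    return 0
}

waitforbdev() {
    local bdev_name=$1
    local bdev_timeout=${2:-2000}   # milliseconds, matching the -t 2000 in the trace
    local i
    rpc_cmd bdev_wait_for_examine
    # Retry bound of 10 is an assumption for this sketch.
    for ((i = 0; i < 10; i++)); do
        if rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" >/dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

waitforbdev BaseBdev4 && echo "BaseBdev4 ready"
```

With the stub in place the wait succeeds immediately; against a real target the loop covers the window between bdev creation and its registration.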
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:36.647 [2024-12-09 05:35:23.516418] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:44:36.647 [2024-12-09 05:35:23.516642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:44:36.647 [2024-12-09 05:35:23.516800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:44:36.647 [2024-12-09 05:35:23.519508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:44:36.647 [2024-12-09 05:35:23.519769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:36.647 "name": "Existed_Raid", 00:44:36.647 "uuid": "1f238f13-1774-41a5-9ba8-e70c2c719596", 00:44:36.647 "strip_size_kb": 64, 00:44:36.647 "state": "configuring", 00:44:36.647 "raid_level": "raid5f", 00:44:36.647 "superblock": true, 00:44:36.647 "num_base_bdevs": 4, 00:44:36.647 "num_base_bdevs_discovered": 3, 00:44:36.647 "num_base_bdevs_operational": 4, 00:44:36.647 "base_bdevs_list": [ 00:44:36.647 { 00:44:36.647 "name": "BaseBdev1", 00:44:36.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:36.647 "is_configured": false, 00:44:36.647 "data_offset": 0, 00:44:36.647 "data_size": 0 00:44:36.647 }, 00:44:36.647 { 00:44:36.647 "name": "BaseBdev2", 00:44:36.647 "uuid": "b41f6f92-e968-47fa-8cc1-b9893e74ba83", 00:44:36.647 "is_configured": true, 00:44:36.647 "data_offset": 2048, 00:44:36.647 
"data_size": 63488 00:44:36.647 }, 00:44:36.647 { 00:44:36.647 "name": "BaseBdev3", 00:44:36.647 "uuid": "16fc5c2b-5995-4225-bd58-ad8e5b43e7e7", 00:44:36.647 "is_configured": true, 00:44:36.647 "data_offset": 2048, 00:44:36.647 "data_size": 63488 00:44:36.647 }, 00:44:36.647 { 00:44:36.647 "name": "BaseBdev4", 00:44:36.647 "uuid": "db66645b-2d00-490d-b0f0-98ba65bc4b3e", 00:44:36.647 "is_configured": true, 00:44:36.647 "data_offset": 2048, 00:44:36.647 "data_size": 63488 00:44:36.647 } 00:44:36.647 ] 00:44:36.647 }' 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:36.647 05:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:37.216 05:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:44:37.216 05:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:37.216 05:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:37.216 [2024-12-09 05:35:24.048617] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:44:37.216 05:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:37.216 05:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:37.216 05:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:44:37.216 05:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:44:37.216 05:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:37.216 05:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:37.216 05:35:24 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:44:37.216 05:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:37.216 05:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:37.216 05:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:37.216 05:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:37.216 05:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:37.216 05:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:37.216 05:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:37.216 05:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:37.216 05:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:37.216 05:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:37.216 "name": "Existed_Raid", 00:44:37.216 "uuid": "1f238f13-1774-41a5-9ba8-e70c2c719596", 00:44:37.216 "strip_size_kb": 64, 00:44:37.216 "state": "configuring", 00:44:37.216 "raid_level": "raid5f", 00:44:37.216 "superblock": true, 00:44:37.216 "num_base_bdevs": 4, 00:44:37.216 "num_base_bdevs_discovered": 2, 00:44:37.216 "num_base_bdevs_operational": 4, 00:44:37.216 "base_bdevs_list": [ 00:44:37.216 { 00:44:37.216 "name": "BaseBdev1", 00:44:37.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:37.216 "is_configured": false, 00:44:37.216 "data_offset": 0, 00:44:37.216 "data_size": 0 00:44:37.216 }, 00:44:37.216 { 00:44:37.216 "name": null, 00:44:37.216 "uuid": "b41f6f92-e968-47fa-8cc1-b9893e74ba83", 00:44:37.216 
"is_configured": false, 00:44:37.216 "data_offset": 0, 00:44:37.216 "data_size": 63488 00:44:37.216 }, 00:44:37.216 { 00:44:37.216 "name": "BaseBdev3", 00:44:37.216 "uuid": "16fc5c2b-5995-4225-bd58-ad8e5b43e7e7", 00:44:37.216 "is_configured": true, 00:44:37.216 "data_offset": 2048, 00:44:37.216 "data_size": 63488 00:44:37.216 }, 00:44:37.216 { 00:44:37.216 "name": "BaseBdev4", 00:44:37.216 "uuid": "db66645b-2d00-490d-b0f0-98ba65bc4b3e", 00:44:37.216 "is_configured": true, 00:44:37.216 "data_offset": 2048, 00:44:37.216 "data_size": 63488 00:44:37.216 } 00:44:37.216 ] 00:44:37.216 }' 00:44:37.216 05:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:37.216 05:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:37.782 05:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:37.782 05:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:44:37.782 05:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:37.782 05:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:37.782 05:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:37.782 05:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:44:37.782 05:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:44:37.782 05:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:37.782 05:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:37.782 [2024-12-09 05:35:24.662837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:44:37.782 BaseBdev1 00:44:37.782 05:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:37.782 05:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:44:37.782 05:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:44:37.782 05:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:44:37.782 05:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:44:37.782 05:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:44:37.782 05:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:44:37.782 05:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:44:37.782 05:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:37.782 05:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:37.782 05:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:37.782 05:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:44:37.782 05:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:37.782 05:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:37.782 [ 00:44:37.782 { 00:44:37.782 "name": "BaseBdev1", 00:44:37.782 "aliases": [ 00:44:37.782 "12a59ec4-dfe1-46cf-808f-d3a5718a8cac" 00:44:37.782 ], 00:44:37.782 "product_name": "Malloc disk", 00:44:37.782 "block_size": 512, 00:44:37.782 "num_blocks": 65536, 00:44:37.782 "uuid": "12a59ec4-dfe1-46cf-808f-d3a5718a8cac", 
00:44:37.782 "assigned_rate_limits": { 00:44:37.782 "rw_ios_per_sec": 0, 00:44:37.782 "rw_mbytes_per_sec": 0, 00:44:37.782 "r_mbytes_per_sec": 0, 00:44:37.782 "w_mbytes_per_sec": 0 00:44:37.782 }, 00:44:37.782 "claimed": true, 00:44:37.782 "claim_type": "exclusive_write", 00:44:37.782 "zoned": false, 00:44:37.782 "supported_io_types": { 00:44:37.782 "read": true, 00:44:37.782 "write": true, 00:44:37.782 "unmap": true, 00:44:37.782 "flush": true, 00:44:37.782 "reset": true, 00:44:37.782 "nvme_admin": false, 00:44:37.782 "nvme_io": false, 00:44:37.782 "nvme_io_md": false, 00:44:37.782 "write_zeroes": true, 00:44:37.782 "zcopy": true, 00:44:37.782 "get_zone_info": false, 00:44:37.782 "zone_management": false, 00:44:37.782 "zone_append": false, 00:44:37.782 "compare": false, 00:44:37.783 "compare_and_write": false, 00:44:37.783 "abort": true, 00:44:37.783 "seek_hole": false, 00:44:37.783 "seek_data": false, 00:44:37.783 "copy": true, 00:44:37.783 "nvme_iov_md": false 00:44:37.783 }, 00:44:37.783 "memory_domains": [ 00:44:37.783 { 00:44:37.783 "dma_device_id": "system", 00:44:37.783 "dma_device_type": 1 00:44:37.783 }, 00:44:37.783 { 00:44:37.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:44:37.783 "dma_device_type": 2 00:44:37.783 } 00:44:37.783 ], 00:44:37.783 "driver_specific": {} 00:44:37.783 } 00:44:37.783 ] 00:44:37.783 05:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:37.783 05:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:44:37.783 05:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:37.783 05:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:44:37.783 05:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:44:37.783 05:35:24 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:37.783 05:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:37.783 05:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:44:37.783 05:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:37.783 05:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:37.783 05:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:37.783 05:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:37.783 05:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:37.783 05:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:37.783 05:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:37.783 05:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:37.783 05:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:37.783 05:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:37.783 "name": "Existed_Raid", 00:44:37.783 "uuid": "1f238f13-1774-41a5-9ba8-e70c2c719596", 00:44:37.783 "strip_size_kb": 64, 00:44:37.783 "state": "configuring", 00:44:37.783 "raid_level": "raid5f", 00:44:37.783 "superblock": true, 00:44:37.783 "num_base_bdevs": 4, 00:44:37.783 "num_base_bdevs_discovered": 3, 00:44:37.783 "num_base_bdevs_operational": 4, 00:44:37.783 "base_bdevs_list": [ 00:44:37.783 { 00:44:37.783 "name": "BaseBdev1", 00:44:37.783 "uuid": "12a59ec4-dfe1-46cf-808f-d3a5718a8cac", 
00:44:37.783 "is_configured": true, 00:44:37.783 "data_offset": 2048, 00:44:37.783 "data_size": 63488 00:44:37.783 }, 00:44:37.783 { 00:44:37.783 "name": null, 00:44:37.783 "uuid": "b41f6f92-e968-47fa-8cc1-b9893e74ba83", 00:44:37.783 "is_configured": false, 00:44:37.783 "data_offset": 0, 00:44:37.783 "data_size": 63488 00:44:37.783 }, 00:44:37.783 { 00:44:37.783 "name": "BaseBdev3", 00:44:37.783 "uuid": "16fc5c2b-5995-4225-bd58-ad8e5b43e7e7", 00:44:37.783 "is_configured": true, 00:44:37.783 "data_offset": 2048, 00:44:37.783 "data_size": 63488 00:44:37.783 }, 00:44:37.783 { 00:44:37.783 "name": "BaseBdev4", 00:44:37.783 "uuid": "db66645b-2d00-490d-b0f0-98ba65bc4b3e", 00:44:37.783 "is_configured": true, 00:44:37.783 "data_offset": 2048, 00:44:37.783 "data_size": 63488 00:44:37.783 } 00:44:37.783 ] 00:44:37.783 }' 00:44:37.783 05:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:37.783 05:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:38.358 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:44:38.358 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:38.358 05:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:38.358 05:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:38.358 05:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:38.358 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:44:38.358 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:44:38.358 05:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:44:38.358 05:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:38.358 [2024-12-09 05:35:25.275081] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:44:38.358 05:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:38.358 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:38.358 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:44:38.358 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:44:38.358 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:38.358 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:38.358 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:44:38.358 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:38.358 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:38.358 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:38.358 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:38.358 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:38.358 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:38.358 05:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:38.358 05:35:25 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:44:38.358 05:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:38.616 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:38.616 "name": "Existed_Raid", 00:44:38.616 "uuid": "1f238f13-1774-41a5-9ba8-e70c2c719596", 00:44:38.616 "strip_size_kb": 64, 00:44:38.616 "state": "configuring", 00:44:38.616 "raid_level": "raid5f", 00:44:38.616 "superblock": true, 00:44:38.616 "num_base_bdevs": 4, 00:44:38.616 "num_base_bdevs_discovered": 2, 00:44:38.616 "num_base_bdevs_operational": 4, 00:44:38.616 "base_bdevs_list": [ 00:44:38.616 { 00:44:38.616 "name": "BaseBdev1", 00:44:38.616 "uuid": "12a59ec4-dfe1-46cf-808f-d3a5718a8cac", 00:44:38.616 "is_configured": true, 00:44:38.616 "data_offset": 2048, 00:44:38.616 "data_size": 63488 00:44:38.616 }, 00:44:38.616 { 00:44:38.616 "name": null, 00:44:38.616 "uuid": "b41f6f92-e968-47fa-8cc1-b9893e74ba83", 00:44:38.616 "is_configured": false, 00:44:38.616 "data_offset": 0, 00:44:38.616 "data_size": 63488 00:44:38.616 }, 00:44:38.616 { 00:44:38.616 "name": null, 00:44:38.616 "uuid": "16fc5c2b-5995-4225-bd58-ad8e5b43e7e7", 00:44:38.616 "is_configured": false, 00:44:38.616 "data_offset": 0, 00:44:38.616 "data_size": 63488 00:44:38.616 }, 00:44:38.616 { 00:44:38.616 "name": "BaseBdev4", 00:44:38.616 "uuid": "db66645b-2d00-490d-b0f0-98ba65bc4b3e", 00:44:38.616 "is_configured": true, 00:44:38.616 "data_offset": 2048, 00:44:38.616 "data_size": 63488 00:44:38.616 } 00:44:38.616 ] 00:44:38.616 }' 00:44:38.616 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:38.616 05:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:38.876 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:38.876 05:35:25 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:44:38.876 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:44:38.876 05:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:38.876 05:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:39.136 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:44:39.136 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:44:39.136 05:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:39.136 05:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:39.136 [2024-12-09 05:35:25.851227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:44:39.136 05:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:39.136 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:39.136 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:44:39.136 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:44:39.136 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:39.136 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:39.136 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:44:39.136 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:44:39.136 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:39.136 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:39.136 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:39.136 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:39.136 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:39.136 05:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:39.136 05:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:39.136 05:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:39.136 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:39.136 "name": "Existed_Raid", 00:44:39.136 "uuid": "1f238f13-1774-41a5-9ba8-e70c2c719596", 00:44:39.136 "strip_size_kb": 64, 00:44:39.136 "state": "configuring", 00:44:39.136 "raid_level": "raid5f", 00:44:39.136 "superblock": true, 00:44:39.136 "num_base_bdevs": 4, 00:44:39.136 "num_base_bdevs_discovered": 3, 00:44:39.136 "num_base_bdevs_operational": 4, 00:44:39.136 "base_bdevs_list": [ 00:44:39.136 { 00:44:39.136 "name": "BaseBdev1", 00:44:39.136 "uuid": "12a59ec4-dfe1-46cf-808f-d3a5718a8cac", 00:44:39.136 "is_configured": true, 00:44:39.136 "data_offset": 2048, 00:44:39.136 "data_size": 63488 00:44:39.136 }, 00:44:39.136 { 00:44:39.136 "name": null, 00:44:39.136 "uuid": "b41f6f92-e968-47fa-8cc1-b9893e74ba83", 00:44:39.136 "is_configured": false, 00:44:39.136 "data_offset": 0, 00:44:39.136 "data_size": 63488 00:44:39.136 }, 00:44:39.136 { 00:44:39.136 "name": "BaseBdev3", 00:44:39.136 "uuid": "16fc5c2b-5995-4225-bd58-ad8e5b43e7e7", 
00:44:39.136 "is_configured": true, 00:44:39.136 "data_offset": 2048, 00:44:39.136 "data_size": 63488 00:44:39.136 }, 00:44:39.136 { 00:44:39.136 "name": "BaseBdev4", 00:44:39.136 "uuid": "db66645b-2d00-490d-b0f0-98ba65bc4b3e", 00:44:39.136 "is_configured": true, 00:44:39.136 "data_offset": 2048, 00:44:39.136 "data_size": 63488 00:44:39.136 } 00:44:39.136 ] 00:44:39.136 }' 00:44:39.136 05:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:39.136 05:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:39.702 05:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:39.702 05:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:44:39.702 05:35:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:39.702 05:35:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:39.702 05:35:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:39.702 05:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:44:39.702 05:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:44:39.703 05:35:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:39.703 05:35:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:39.703 [2024-12-09 05:35:26.439517] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:44:39.703 05:35:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:39.703 05:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:44:39.703 05:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:44:39.703 05:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:44:39.703 05:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:39.703 05:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:39.703 05:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:44:39.703 05:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:39.703 05:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:39.703 05:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:39.703 05:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:39.703 05:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:39.703 05:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:39.703 05:35:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:39.703 05:35:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:39.703 05:35:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:39.703 05:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:39.703 "name": "Existed_Raid", 00:44:39.703 "uuid": "1f238f13-1774-41a5-9ba8-e70c2c719596", 00:44:39.703 "strip_size_kb": 64, 00:44:39.703 "state": "configuring", 00:44:39.703 "raid_level": "raid5f", 
00:44:39.703 "superblock": true, 00:44:39.703 "num_base_bdevs": 4, 00:44:39.703 "num_base_bdevs_discovered": 2, 00:44:39.703 "num_base_bdevs_operational": 4, 00:44:39.703 "base_bdevs_list": [ 00:44:39.703 { 00:44:39.703 "name": null, 00:44:39.703 "uuid": "12a59ec4-dfe1-46cf-808f-d3a5718a8cac", 00:44:39.703 "is_configured": false, 00:44:39.703 "data_offset": 0, 00:44:39.703 "data_size": 63488 00:44:39.703 }, 00:44:39.703 { 00:44:39.703 "name": null, 00:44:39.703 "uuid": "b41f6f92-e968-47fa-8cc1-b9893e74ba83", 00:44:39.703 "is_configured": false, 00:44:39.703 "data_offset": 0, 00:44:39.703 "data_size": 63488 00:44:39.703 }, 00:44:39.703 { 00:44:39.703 "name": "BaseBdev3", 00:44:39.703 "uuid": "16fc5c2b-5995-4225-bd58-ad8e5b43e7e7", 00:44:39.703 "is_configured": true, 00:44:39.703 "data_offset": 2048, 00:44:39.703 "data_size": 63488 00:44:39.703 }, 00:44:39.703 { 00:44:39.703 "name": "BaseBdev4", 00:44:39.703 "uuid": "db66645b-2d00-490d-b0f0-98ba65bc4b3e", 00:44:39.703 "is_configured": true, 00:44:39.703 "data_offset": 2048, 00:44:39.703 "data_size": 63488 00:44:39.703 } 00:44:39.703 ] 00:44:39.703 }' 00:44:39.703 05:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:39.703 05:35:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:40.269 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:40.269 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:44:40.269 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:40.269 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:40.269 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:40.269 05:35:27 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:44:40.269 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:44:40.270 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:40.270 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:40.270 [2024-12-09 05:35:27.105279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:44:40.270 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:40.270 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:40.270 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:44:40.270 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:44:40.270 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:40.270 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:40.270 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:44:40.270 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:40.270 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:40.270 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:40.270 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:40.270 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:44:40.270 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:40.270 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:40.270 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:40.270 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:40.270 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:40.270 "name": "Existed_Raid", 00:44:40.270 "uuid": "1f238f13-1774-41a5-9ba8-e70c2c719596", 00:44:40.270 "strip_size_kb": 64, 00:44:40.270 "state": "configuring", 00:44:40.270 "raid_level": "raid5f", 00:44:40.270 "superblock": true, 00:44:40.270 "num_base_bdevs": 4, 00:44:40.270 "num_base_bdevs_discovered": 3, 00:44:40.270 "num_base_bdevs_operational": 4, 00:44:40.270 "base_bdevs_list": [ 00:44:40.270 { 00:44:40.270 "name": null, 00:44:40.270 "uuid": "12a59ec4-dfe1-46cf-808f-d3a5718a8cac", 00:44:40.270 "is_configured": false, 00:44:40.270 "data_offset": 0, 00:44:40.270 "data_size": 63488 00:44:40.270 }, 00:44:40.270 { 00:44:40.270 "name": "BaseBdev2", 00:44:40.270 "uuid": "b41f6f92-e968-47fa-8cc1-b9893e74ba83", 00:44:40.270 "is_configured": true, 00:44:40.270 "data_offset": 2048, 00:44:40.270 "data_size": 63488 00:44:40.270 }, 00:44:40.270 { 00:44:40.270 "name": "BaseBdev3", 00:44:40.270 "uuid": "16fc5c2b-5995-4225-bd58-ad8e5b43e7e7", 00:44:40.270 "is_configured": true, 00:44:40.270 "data_offset": 2048, 00:44:40.270 "data_size": 63488 00:44:40.270 }, 00:44:40.270 { 00:44:40.270 "name": "BaseBdev4", 00:44:40.270 "uuid": "db66645b-2d00-490d-b0f0-98ba65bc4b3e", 00:44:40.270 "is_configured": true, 00:44:40.270 "data_offset": 2048, 00:44:40.270 "data_size": 63488 00:44:40.270 } 00:44:40.270 ] 00:44:40.270 }' 00:44:40.270 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:44:40.270 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:40.835 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:40.835 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:40.835 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:40.835 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:44:40.835 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:40.835 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:44:40.835 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:40.835 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:44:40.835 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:40.835 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:40.835 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:40.835 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 12a59ec4-dfe1-46cf-808f-d3a5718a8cac 00:44:40.835 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:40.835 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:40.835 [2024-12-09 05:35:27.752101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:44:40.835 NewBaseBdev 00:44:40.835 [2024-12-09 
05:35:27.752716] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:44:40.835 [2024-12-09 05:35:27.752750] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:44:40.835 [2024-12-09 05:35:27.753119] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:44:40.835 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:40.835 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:44:40.835 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:44:40.835 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:44:40.835 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:44:40.835 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:44:40.835 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:44:40.835 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:44:40.835 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:40.835 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:40.835 [2024-12-09 05:35:27.759695] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:44:40.835 [2024-12-09 05:35:27.759878] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:44:40.835 [2024-12-09 05:35:27.760349] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:44:40.835 05:35:27 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:40.835 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:44:40.835 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:40.835 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:40.835 [ 00:44:40.835 { 00:44:40.835 "name": "NewBaseBdev", 00:44:40.835 "aliases": [ 00:44:40.835 "12a59ec4-dfe1-46cf-808f-d3a5718a8cac" 00:44:40.835 ], 00:44:40.835 "product_name": "Malloc disk", 00:44:40.835 "block_size": 512, 00:44:40.835 "num_blocks": 65536, 00:44:40.835 "uuid": "12a59ec4-dfe1-46cf-808f-d3a5718a8cac", 00:44:40.835 "assigned_rate_limits": { 00:44:40.835 "rw_ios_per_sec": 0, 00:44:40.835 "rw_mbytes_per_sec": 0, 00:44:40.835 "r_mbytes_per_sec": 0, 00:44:40.835 "w_mbytes_per_sec": 0 00:44:40.835 }, 00:44:40.835 "claimed": true, 00:44:40.835 "claim_type": "exclusive_write", 00:44:40.835 "zoned": false, 00:44:40.835 "supported_io_types": { 00:44:40.835 "read": true, 00:44:40.835 "write": true, 00:44:40.835 "unmap": true, 00:44:40.835 "flush": true, 00:44:40.835 "reset": true, 00:44:40.835 "nvme_admin": false, 00:44:40.835 "nvme_io": false, 00:44:40.835 "nvme_io_md": false, 00:44:40.835 "write_zeroes": true, 00:44:40.836 "zcopy": true, 00:44:40.836 "get_zone_info": false, 00:44:40.836 "zone_management": false, 00:44:40.836 "zone_append": false, 00:44:40.836 "compare": false, 00:44:40.836 "compare_and_write": false, 00:44:40.836 "abort": true, 00:44:40.836 "seek_hole": false, 00:44:40.836 "seek_data": false, 00:44:40.836 "copy": true, 00:44:40.836 "nvme_iov_md": false 00:44:40.836 }, 00:44:40.836 "memory_domains": [ 00:44:40.836 { 00:44:40.836 "dma_device_id": "system", 00:44:40.836 "dma_device_type": 1 00:44:40.836 }, 00:44:40.836 { 00:44:40.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:44:40.836 "dma_device_type": 2 00:44:40.836 } 
00:44:40.836 ], 00:44:40.836 "driver_specific": {} 00:44:40.836 } 00:44:40.836 ] 00:44:40.836 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:40.836 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:44:40.836 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:44:40.836 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:44:40.836 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:44:40.836 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:40.836 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:40.836 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:44:40.836 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:40.836 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:40.836 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:40.836 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:40.836 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:40.836 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:40.836 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:40.836 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:41.094 
05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:41.094 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:41.094 "name": "Existed_Raid", 00:44:41.094 "uuid": "1f238f13-1774-41a5-9ba8-e70c2c719596", 00:44:41.094 "strip_size_kb": 64, 00:44:41.094 "state": "online", 00:44:41.094 "raid_level": "raid5f", 00:44:41.094 "superblock": true, 00:44:41.094 "num_base_bdevs": 4, 00:44:41.094 "num_base_bdevs_discovered": 4, 00:44:41.094 "num_base_bdevs_operational": 4, 00:44:41.094 "base_bdevs_list": [ 00:44:41.094 { 00:44:41.094 "name": "NewBaseBdev", 00:44:41.094 "uuid": "12a59ec4-dfe1-46cf-808f-d3a5718a8cac", 00:44:41.094 "is_configured": true, 00:44:41.094 "data_offset": 2048, 00:44:41.094 "data_size": 63488 00:44:41.094 }, 00:44:41.094 { 00:44:41.094 "name": "BaseBdev2", 00:44:41.094 "uuid": "b41f6f92-e968-47fa-8cc1-b9893e74ba83", 00:44:41.094 "is_configured": true, 00:44:41.094 "data_offset": 2048, 00:44:41.094 "data_size": 63488 00:44:41.094 }, 00:44:41.094 { 00:44:41.094 "name": "BaseBdev3", 00:44:41.094 "uuid": "16fc5c2b-5995-4225-bd58-ad8e5b43e7e7", 00:44:41.094 "is_configured": true, 00:44:41.094 "data_offset": 2048, 00:44:41.094 "data_size": 63488 00:44:41.094 }, 00:44:41.094 { 00:44:41.094 "name": "BaseBdev4", 00:44:41.094 "uuid": "db66645b-2d00-490d-b0f0-98ba65bc4b3e", 00:44:41.094 "is_configured": true, 00:44:41.094 "data_offset": 2048, 00:44:41.094 "data_size": 63488 00:44:41.094 } 00:44:41.094 ] 00:44:41.094 }' 00:44:41.094 05:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:41.094 05:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:41.351 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:44:41.351 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:44:41.351 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:44:41.351 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:44:41.351 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:44:41.351 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:44:41.351 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:44:41.351 05:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:41.351 05:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:41.351 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:44:41.351 [2024-12-09 05:35:28.256338] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:44:41.351 05:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:41.351 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:44:41.351 "name": "Existed_Raid", 00:44:41.351 "aliases": [ 00:44:41.351 "1f238f13-1774-41a5-9ba8-e70c2c719596" 00:44:41.351 ], 00:44:41.351 "product_name": "Raid Volume", 00:44:41.351 "block_size": 512, 00:44:41.351 "num_blocks": 190464, 00:44:41.351 "uuid": "1f238f13-1774-41a5-9ba8-e70c2c719596", 00:44:41.351 "assigned_rate_limits": { 00:44:41.351 "rw_ios_per_sec": 0, 00:44:41.351 "rw_mbytes_per_sec": 0, 00:44:41.351 "r_mbytes_per_sec": 0, 00:44:41.351 "w_mbytes_per_sec": 0 00:44:41.351 }, 00:44:41.351 "claimed": false, 00:44:41.352 "zoned": false, 00:44:41.352 "supported_io_types": { 00:44:41.352 "read": true, 00:44:41.352 "write": true, 00:44:41.352 "unmap": false, 00:44:41.352 "flush": false, 
00:44:41.352 "reset": true, 00:44:41.352 "nvme_admin": false, 00:44:41.352 "nvme_io": false, 00:44:41.352 "nvme_io_md": false, 00:44:41.352 "write_zeroes": true, 00:44:41.352 "zcopy": false, 00:44:41.352 "get_zone_info": false, 00:44:41.352 "zone_management": false, 00:44:41.352 "zone_append": false, 00:44:41.352 "compare": false, 00:44:41.352 "compare_and_write": false, 00:44:41.352 "abort": false, 00:44:41.352 "seek_hole": false, 00:44:41.352 "seek_data": false, 00:44:41.352 "copy": false, 00:44:41.352 "nvme_iov_md": false 00:44:41.352 }, 00:44:41.352 "driver_specific": { 00:44:41.352 "raid": { 00:44:41.352 "uuid": "1f238f13-1774-41a5-9ba8-e70c2c719596", 00:44:41.352 "strip_size_kb": 64, 00:44:41.352 "state": "online", 00:44:41.352 "raid_level": "raid5f", 00:44:41.352 "superblock": true, 00:44:41.352 "num_base_bdevs": 4, 00:44:41.352 "num_base_bdevs_discovered": 4, 00:44:41.352 "num_base_bdevs_operational": 4, 00:44:41.352 "base_bdevs_list": [ 00:44:41.352 { 00:44:41.352 "name": "NewBaseBdev", 00:44:41.352 "uuid": "12a59ec4-dfe1-46cf-808f-d3a5718a8cac", 00:44:41.352 "is_configured": true, 00:44:41.352 "data_offset": 2048, 00:44:41.352 "data_size": 63488 00:44:41.352 }, 00:44:41.352 { 00:44:41.352 "name": "BaseBdev2", 00:44:41.352 "uuid": "b41f6f92-e968-47fa-8cc1-b9893e74ba83", 00:44:41.352 "is_configured": true, 00:44:41.352 "data_offset": 2048, 00:44:41.352 "data_size": 63488 00:44:41.352 }, 00:44:41.352 { 00:44:41.352 "name": "BaseBdev3", 00:44:41.352 "uuid": "16fc5c2b-5995-4225-bd58-ad8e5b43e7e7", 00:44:41.352 "is_configured": true, 00:44:41.352 "data_offset": 2048, 00:44:41.352 "data_size": 63488 00:44:41.352 }, 00:44:41.352 { 00:44:41.352 "name": "BaseBdev4", 00:44:41.352 "uuid": "db66645b-2d00-490d-b0f0-98ba65bc4b3e", 00:44:41.352 "is_configured": true, 00:44:41.352 "data_offset": 2048, 00:44:41.352 "data_size": 63488 00:44:41.352 } 00:44:41.352 ] 00:44:41.352 } 00:44:41.352 } 00:44:41.352 }' 00:44:41.352 05:35:28 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:44:41.610 BaseBdev2 00:44:41.610 BaseBdev3 00:44:41.610 BaseBdev4' 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:44:41.610 
05:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:44:41.610 05:35:28 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:41.610 [2024-12-09 05:35:28.576126] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:44:41.610 [2024-12-09 05:35:28.576304] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:44:41.610 [2024-12-09 05:35:28.576533] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:44:41.610 [2024-12-09 05:35:28.577074] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:44:41.610 [2024-12-09 05:35:28.577104] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83957 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83957 ']' 00:44:41.610 05:35:28 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 83957 00:44:41.869 05:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:44:41.869 05:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:41.869 05:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83957 00:44:41.869 05:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:41.869 05:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:41.869 05:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83957' 00:44:41.869 killing process with pid 83957 00:44:41.869 05:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83957 00:44:41.869 [2024-12-09 05:35:28.612651] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:44:41.869 05:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83957 00:44:42.126 [2024-12-09 05:35:28.991527] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:44:43.494 05:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:44:43.495 00:44:43.495 real 0m12.845s 00:44:43.495 user 0m21.073s 00:44:43.495 sys 0m1.888s 00:44:43.495 05:35:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:43.495 ************************************ 00:44:43.495 END TEST raid5f_state_function_test_sb 00:44:43.495 ************************************ 00:44:43.495 05:35:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:44:43.495 05:35:30 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:44:43.495 05:35:30 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:44:43.495 05:35:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:43.495 05:35:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:44:43.495 ************************************ 00:44:43.495 START TEST raid5f_superblock_test 00:44:43.495 ************************************ 00:44:43.495 05:35:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:44:43.495 05:35:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:44:43.495 05:35:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:44:43.495 05:35:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:44:43.495 05:35:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:44:43.495 05:35:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:44:43.495 05:35:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:44:43.495 05:35:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:44:43.495 05:35:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:44:43.495 05:35:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:44:43.495 05:35:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:44:43.495 05:35:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:44:43.495 05:35:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:44:43.495 05:35:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:44:43.495 05:35:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:44:43.495 05:35:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:44:43.495 05:35:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:44:43.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:43.495 05:35:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84640 00:44:43.495 05:35:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84640 00:44:43.495 05:35:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:44:43.495 05:35:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84640 ']' 00:44:43.495 05:35:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:43.495 05:35:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:43.495 05:35:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:43.495 05:35:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:43.495 05:35:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:43.495 [2024-12-09 05:35:30.311782] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:44:43.495 [2024-12-09 05:35:30.311989] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84640 ] 00:44:43.750 [2024-12-09 05:35:30.493185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:43.750 [2024-12-09 05:35:30.622958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:44.008 [2024-12-09 05:35:30.819976] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:44:44.008 [2024-12-09 05:35:30.820099] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:44:44.265 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:44.265 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:44:44.265 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:44:44.265 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:44:44.265 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:44:44.265 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:44:44.265 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:44:44.265 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:44:44.265 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:44:44.265 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:44:44.265 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:44:44.265 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:44.265 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:44.522 malloc1 00:44:44.522 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:44.522 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:44:44.522 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:44.522 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:44.522 [2024-12-09 05:35:31.267534] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:44:44.522 [2024-12-09 05:35:31.267793] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:44.522 [2024-12-09 05:35:31.267894] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:44:44.522 [2024-12-09 05:35:31.268212] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:44.522 [2024-12-09 05:35:31.271122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:44.522 pt1 00:44:44.522 [2024-12-09 05:35:31.271307] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:44:44.522 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:44.522 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:44:44.522 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:44:44.522 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:44:44.522 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:44:44.522 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:44.523 malloc2 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:44.523 [2024-12-09 05:35:31.321361] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:44:44.523 [2024-12-09 05:35:31.321461] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:44.523 [2024-12-09 05:35:31.321499] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:44:44.523 [2024-12-09 05:35:31.321514] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:44.523 [2024-12-09 05:35:31.324372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:44.523 [2024-12-09 05:35:31.324430] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:44:44.523 pt2 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:44.523 malloc3 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:44.523 [2024-12-09 05:35:31.381086] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:44:44.523 [2024-12-09 05:35:31.381204] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:44.523 [2024-12-09 05:35:31.381238] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:44:44.523 [2024-12-09 05:35:31.381254] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:44.523 [2024-12-09 05:35:31.384194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:44.523 [2024-12-09 05:35:31.384237] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:44:44.523 pt3 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:44.523 05:35:31 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:44.523 malloc4 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:44.523 [2024-12-09 05:35:31.433764] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:44:44.523 [2024-12-09 05:35:31.433885] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:44.523 [2024-12-09 05:35:31.433917] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:44:44.523 [2024-12-09 05:35:31.433932] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:44.523 [2024-12-09 05:35:31.436754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:44.523 [2024-12-09 05:35:31.436825] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:44:44.523 pt4 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:44:44.523 [2024-12-09 05:35:31.441840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:44:44.523 [2024-12-09 05:35:31.444476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:44:44.523 [2024-12-09 05:35:31.444730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:44:44.523 [2024-12-09 05:35:31.444989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:44:44.523 [2024-12-09 05:35:31.445296] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:44:44.523 [2024-12-09 05:35:31.445318] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:44:44.523 [2024-12-09 05:35:31.445658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:44:44.523 [2024-12-09 05:35:31.452316] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:44:44.523 [2024-12-09 05:35:31.452470] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:44:44.523 [2024-12-09 05:35:31.452969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:44.523 
05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:44.523 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:44.781 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:44.781 "name": "raid_bdev1", 00:44:44.781 "uuid": "bea2b507-ac6c-4ca2-ab01-4598b55a8b26", 00:44:44.781 "strip_size_kb": 64, 00:44:44.781 "state": "online", 00:44:44.781 "raid_level": "raid5f", 00:44:44.781 "superblock": true, 00:44:44.781 "num_base_bdevs": 4, 00:44:44.781 "num_base_bdevs_discovered": 4, 00:44:44.781 "num_base_bdevs_operational": 4, 00:44:44.781 "base_bdevs_list": [ 00:44:44.781 { 00:44:44.781 "name": "pt1", 00:44:44.781 "uuid": "00000000-0000-0000-0000-000000000001", 00:44:44.781 "is_configured": true, 00:44:44.781 "data_offset": 2048, 00:44:44.781 "data_size": 63488 00:44:44.781 }, 00:44:44.781 { 00:44:44.781 "name": "pt2", 00:44:44.781 "uuid": "00000000-0000-0000-0000-000000000002", 00:44:44.781 "is_configured": true, 00:44:44.781 "data_offset": 2048, 00:44:44.781 
"data_size": 63488 00:44:44.781 }, 00:44:44.781 { 00:44:44.781 "name": "pt3", 00:44:44.781 "uuid": "00000000-0000-0000-0000-000000000003", 00:44:44.781 "is_configured": true, 00:44:44.781 "data_offset": 2048, 00:44:44.781 "data_size": 63488 00:44:44.781 }, 00:44:44.781 { 00:44:44.781 "name": "pt4", 00:44:44.781 "uuid": "00000000-0000-0000-0000-000000000004", 00:44:44.781 "is_configured": true, 00:44:44.781 "data_offset": 2048, 00:44:44.781 "data_size": 63488 00:44:44.781 } 00:44:44.781 ] 00:44:44.781 }' 00:44:44.781 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:44.781 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:45.040 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:44:45.040 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:44:45.040 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:44:45.040 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:44:45.040 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:44:45.040 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:44:45.040 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:44:45.040 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:45.040 05:35:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:45.040 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:44:45.040 [2024-12-09 05:35:31.937033] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:44:45.040 05:35:31 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:45.040 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:44:45.040 "name": "raid_bdev1", 00:44:45.040 "aliases": [ 00:44:45.040 "bea2b507-ac6c-4ca2-ab01-4598b55a8b26" 00:44:45.040 ], 00:44:45.040 "product_name": "Raid Volume", 00:44:45.040 "block_size": 512, 00:44:45.040 "num_blocks": 190464, 00:44:45.040 "uuid": "bea2b507-ac6c-4ca2-ab01-4598b55a8b26", 00:44:45.040 "assigned_rate_limits": { 00:44:45.040 "rw_ios_per_sec": 0, 00:44:45.040 "rw_mbytes_per_sec": 0, 00:44:45.040 "r_mbytes_per_sec": 0, 00:44:45.040 "w_mbytes_per_sec": 0 00:44:45.040 }, 00:44:45.040 "claimed": false, 00:44:45.040 "zoned": false, 00:44:45.040 "supported_io_types": { 00:44:45.040 "read": true, 00:44:45.040 "write": true, 00:44:45.040 "unmap": false, 00:44:45.040 "flush": false, 00:44:45.040 "reset": true, 00:44:45.040 "nvme_admin": false, 00:44:45.040 "nvme_io": false, 00:44:45.040 "nvme_io_md": false, 00:44:45.040 "write_zeroes": true, 00:44:45.040 "zcopy": false, 00:44:45.041 "get_zone_info": false, 00:44:45.041 "zone_management": false, 00:44:45.041 "zone_append": false, 00:44:45.041 "compare": false, 00:44:45.041 "compare_and_write": false, 00:44:45.041 "abort": false, 00:44:45.041 "seek_hole": false, 00:44:45.041 "seek_data": false, 00:44:45.041 "copy": false, 00:44:45.041 "nvme_iov_md": false 00:44:45.041 }, 00:44:45.041 "driver_specific": { 00:44:45.041 "raid": { 00:44:45.041 "uuid": "bea2b507-ac6c-4ca2-ab01-4598b55a8b26", 00:44:45.041 "strip_size_kb": 64, 00:44:45.041 "state": "online", 00:44:45.041 "raid_level": "raid5f", 00:44:45.041 "superblock": true, 00:44:45.041 "num_base_bdevs": 4, 00:44:45.041 "num_base_bdevs_discovered": 4, 00:44:45.041 "num_base_bdevs_operational": 4, 00:44:45.041 "base_bdevs_list": [ 00:44:45.041 { 00:44:45.041 "name": "pt1", 00:44:45.041 "uuid": "00000000-0000-0000-0000-000000000001", 00:44:45.041 "is_configured": true, 00:44:45.041 "data_offset": 2048, 
00:44:45.041 "data_size": 63488 00:44:45.041 }, 00:44:45.041 { 00:44:45.041 "name": "pt2", 00:44:45.041 "uuid": "00000000-0000-0000-0000-000000000002", 00:44:45.041 "is_configured": true, 00:44:45.041 "data_offset": 2048, 00:44:45.041 "data_size": 63488 00:44:45.041 }, 00:44:45.041 { 00:44:45.041 "name": "pt3", 00:44:45.041 "uuid": "00000000-0000-0000-0000-000000000003", 00:44:45.041 "is_configured": true, 00:44:45.041 "data_offset": 2048, 00:44:45.041 "data_size": 63488 00:44:45.041 }, 00:44:45.041 { 00:44:45.041 "name": "pt4", 00:44:45.041 "uuid": "00000000-0000-0000-0000-000000000004", 00:44:45.041 "is_configured": true, 00:44:45.041 "data_offset": 2048, 00:44:45.041 "data_size": 63488 00:44:45.041 } 00:44:45.041 ] 00:44:45.041 } 00:44:45.041 } 00:44:45.041 }' 00:44:45.041 05:35:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:44:45.299 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:44:45.299 pt2 00:44:45.299 pt3 00:44:45.299 pt4' 00:44:45.299 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:44:45.299 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:44:45.299 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:44:45.299 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:44:45.299 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:44:45.299 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:45.299 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:45.299 05:35:32 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:45.299 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:44:45.299 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:44:45.299 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:44:45.299 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:44:45.299 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:44:45.299 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:45.299 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:45.299 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:45.299 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:44:45.299 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:44:45.299 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:44:45.299 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:44:45.299 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:44:45.299 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:45.299 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:45.299 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:45.299 05:35:32 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:44:45.299 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:44:45.299 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:44:45.299 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:44:45.299 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:44:45.299 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:45.299 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:45.558 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:45.558 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:45.559 [2024-12-09 05:35:32.317110] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bea2b507-ac6c-4ca2-ab01-4598b55a8b26 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
bea2b507-ac6c-4ca2-ab01-4598b55a8b26 ']' 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:45.559 [2024-12-09 05:35:32.364849] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:44:45.559 [2024-12-09 05:35:32.365027] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:44:45.559 [2024-12-09 05:35:32.365239] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:44:45.559 [2024-12-09 05:35:32.365486] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:44:45.559 [2024-12-09 05:35:32.365639] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:44:45.559 
05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:45.559 05:35:32 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:45.559 [2024-12-09 05:35:32.520904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:44:45.559 [2024-12-09 05:35:32.523730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:44:45.559 [2024-12-09 05:35:32.523826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:44:45.559 [2024-12-09 05:35:32.523920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:44:45.559 [2024-12-09 05:35:32.523996] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:44:45.559 [2024-12-09 05:35:32.524094] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:44:45.559 [2024-12-09 05:35:32.524128] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:44:45.559 [2024-12-09 05:35:32.524162] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:44:45.559 [2024-12-09 05:35:32.524184] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:44:45.559 [2024-12-09 05:35:32.524201] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:44:45.559 request: 00:44:45.559 { 00:44:45.559 "name": "raid_bdev1", 00:44:45.559 "raid_level": "raid5f", 00:44:45.559 "base_bdevs": [ 00:44:45.559 "malloc1", 00:44:45.559 "malloc2", 00:44:45.559 "malloc3", 00:44:45.559 "malloc4" 00:44:45.559 ], 00:44:45.559 "strip_size_kb": 64, 00:44:45.559 "superblock": false, 00:44:45.559 "method": "bdev_raid_create", 00:44:45.559 "req_id": 1 00:44:45.559 } 00:44:45.559 Got JSON-RPC error response 
00:44:45.559 response: 00:44:45.559 { 00:44:45.559 "code": -17, 00:44:45.559 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:44:45.559 } 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:45.559 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:45.873 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:45.873 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:44:45.873 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:45.873 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:45.873 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:45.873 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:44:45.873 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:44:45.873 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:44:45.873 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:45.873 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:45.873 [2024-12-09 05:35:32.580945] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:44:45.873 [2024-12-09 05:35:32.581156] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:44:45.873 [2024-12-09 05:35:32.581222] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:44:45.873 [2024-12-09 05:35:32.581460] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:45.873 [2024-12-09 05:35:32.584509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:45.873 [2024-12-09 05:35:32.584726] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:44:45.873 [2024-12-09 05:35:32.584933] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:44:45.873 [2024-12-09 05:35:32.585102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:44:45.873 pt1 00:44:45.873 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:45.873 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:44:45.873 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:44:45.873 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:44:45.873 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:45.873 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:45.873 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:44:45.873 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:45.873 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:45.873 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:45.873 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:44:45.873 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:45.873 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:45.873 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:45.873 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:45.873 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:45.873 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:45.873 "name": "raid_bdev1", 00:44:45.873 "uuid": "bea2b507-ac6c-4ca2-ab01-4598b55a8b26", 00:44:45.873 "strip_size_kb": 64, 00:44:45.873 "state": "configuring", 00:44:45.873 "raid_level": "raid5f", 00:44:45.873 "superblock": true, 00:44:45.873 "num_base_bdevs": 4, 00:44:45.873 "num_base_bdevs_discovered": 1, 00:44:45.873 "num_base_bdevs_operational": 4, 00:44:45.873 "base_bdevs_list": [ 00:44:45.873 { 00:44:45.873 "name": "pt1", 00:44:45.873 "uuid": "00000000-0000-0000-0000-000000000001", 00:44:45.873 "is_configured": true, 00:44:45.873 "data_offset": 2048, 00:44:45.873 "data_size": 63488 00:44:45.873 }, 00:44:45.873 { 00:44:45.874 "name": null, 00:44:45.874 "uuid": "00000000-0000-0000-0000-000000000002", 00:44:45.874 "is_configured": false, 00:44:45.874 "data_offset": 2048, 00:44:45.874 "data_size": 63488 00:44:45.874 }, 00:44:45.874 { 00:44:45.874 "name": null, 00:44:45.874 "uuid": "00000000-0000-0000-0000-000000000003", 00:44:45.874 "is_configured": false, 00:44:45.874 "data_offset": 2048, 00:44:45.874 "data_size": 63488 00:44:45.874 }, 00:44:45.874 { 00:44:45.874 "name": null, 00:44:45.874 "uuid": "00000000-0000-0000-0000-000000000004", 00:44:45.874 "is_configured": false, 00:44:45.874 "data_offset": 2048, 00:44:45.874 "data_size": 63488 00:44:45.874 } 00:44:45.874 ] 00:44:45.874 }' 
00:44:45.874 05:35:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:45.874 05:35:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:46.132 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:44:46.132 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:44:46.132 05:35:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:46.132 05:35:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:46.391 [2024-12-09 05:35:33.109279] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:44:46.391 [2024-12-09 05:35:33.109556] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:46.391 [2024-12-09 05:35:33.109630] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:44:46.391 [2024-12-09 05:35:33.109660] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:46.391 [2024-12-09 05:35:33.110378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:46.391 [2024-12-09 05:35:33.110413] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:44:46.391 [2024-12-09 05:35:33.110601] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:44:46.391 [2024-12-09 05:35:33.110650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:44:46.391 pt2 00:44:46.391 05:35:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:46.391 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:44:46.391 05:35:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:44:46.391 05:35:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:46.391 [2024-12-09 05:35:33.117238] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:44:46.391 05:35:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:46.391 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:44:46.391 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:44:46.391 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:44:46.391 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:46.391 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:46.391 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:44:46.391 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:46.391 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:46.391 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:46.391 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:46.391 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:46.391 05:35:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:46.391 05:35:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:46.391 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:46.391 05:35:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:44:46.391 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:46.391 "name": "raid_bdev1", 00:44:46.391 "uuid": "bea2b507-ac6c-4ca2-ab01-4598b55a8b26", 00:44:46.391 "strip_size_kb": 64, 00:44:46.391 "state": "configuring", 00:44:46.391 "raid_level": "raid5f", 00:44:46.391 "superblock": true, 00:44:46.391 "num_base_bdevs": 4, 00:44:46.391 "num_base_bdevs_discovered": 1, 00:44:46.391 "num_base_bdevs_operational": 4, 00:44:46.391 "base_bdevs_list": [ 00:44:46.391 { 00:44:46.391 "name": "pt1", 00:44:46.391 "uuid": "00000000-0000-0000-0000-000000000001", 00:44:46.391 "is_configured": true, 00:44:46.391 "data_offset": 2048, 00:44:46.391 "data_size": 63488 00:44:46.391 }, 00:44:46.391 { 00:44:46.391 "name": null, 00:44:46.391 "uuid": "00000000-0000-0000-0000-000000000002", 00:44:46.391 "is_configured": false, 00:44:46.391 "data_offset": 0, 00:44:46.391 "data_size": 63488 00:44:46.391 }, 00:44:46.391 { 00:44:46.391 "name": null, 00:44:46.391 "uuid": "00000000-0000-0000-0000-000000000003", 00:44:46.391 "is_configured": false, 00:44:46.391 "data_offset": 2048, 00:44:46.391 "data_size": 63488 00:44:46.391 }, 00:44:46.391 { 00:44:46.391 "name": null, 00:44:46.391 "uuid": "00000000-0000-0000-0000-000000000004", 00:44:46.391 "is_configured": false, 00:44:46.391 "data_offset": 2048, 00:44:46.391 "data_size": 63488 00:44:46.391 } 00:44:46.391 ] 00:44:46.391 }' 00:44:46.391 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:46.391 05:35:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:46.958 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:44:46.958 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:44:46.958 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:44:46.958 05:35:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:46.958 05:35:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:46.958 [2024-12-09 05:35:33.645392] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:44:46.958 [2024-12-09 05:35:33.645666] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:46.958 [2024-12-09 05:35:33.645742] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:44:46.958 [2024-12-09 05:35:33.645949] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:46.958 [2024-12-09 05:35:33.646647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:46.958 [2024-12-09 05:35:33.646679] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:44:46.958 [2024-12-09 05:35:33.646820] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:44:46.959 [2024-12-09 05:35:33.646854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:44:46.959 pt2 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:46.959 [2024-12-09 05:35:33.653332] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:44:46.959 [2024-12-09 05:35:33.653559] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:46.959 [2024-12-09 05:35:33.653636] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:44:46.959 [2024-12-09 05:35:33.653788] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:46.959 [2024-12-09 05:35:33.654319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:46.959 [2024-12-09 05:35:33.654496] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:44:46.959 [2024-12-09 05:35:33.654749] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:44:46.959 [2024-12-09 05:35:33.654921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:44:46.959 pt3 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:46.959 [2024-12-09 05:35:33.661316] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:44:46.959 [2024-12-09 05:35:33.661526] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:46.959 [2024-12-09 05:35:33.661562] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:44:46.959 [2024-12-09 05:35:33.661577] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:46.959 [2024-12-09 05:35:33.662184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:46.959 [2024-12-09 05:35:33.662215] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:44:46.959 [2024-12-09 05:35:33.662309] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:44:46.959 [2024-12-09 05:35:33.662339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:44:46.959 [2024-12-09 05:35:33.662504] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:44:46.959 [2024-12-09 05:35:33.662564] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:44:46.959 [2024-12-09 05:35:33.662890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:44:46.959 [2024-12-09 05:35:33.669449] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:44:46.959 [2024-12-09 05:35:33.669478] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:44:46.959 [2024-12-09 05:35:33.669707] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:44:46.959 pt4 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:46.959 "name": "raid_bdev1", 00:44:46.959 "uuid": "bea2b507-ac6c-4ca2-ab01-4598b55a8b26", 00:44:46.959 "strip_size_kb": 64, 00:44:46.959 "state": "online", 00:44:46.959 "raid_level": "raid5f", 00:44:46.959 "superblock": true, 00:44:46.959 "num_base_bdevs": 4, 00:44:46.959 "num_base_bdevs_discovered": 4, 00:44:46.959 "num_base_bdevs_operational": 4, 00:44:46.959 "base_bdevs_list": [ 00:44:46.959 { 00:44:46.959 "name": "pt1", 00:44:46.959 "uuid": "00000000-0000-0000-0000-000000000001", 00:44:46.959 "is_configured": true, 00:44:46.959 
"data_offset": 2048, 00:44:46.959 "data_size": 63488 00:44:46.959 }, 00:44:46.959 { 00:44:46.959 "name": "pt2", 00:44:46.959 "uuid": "00000000-0000-0000-0000-000000000002", 00:44:46.959 "is_configured": true, 00:44:46.959 "data_offset": 2048, 00:44:46.959 "data_size": 63488 00:44:46.959 }, 00:44:46.959 { 00:44:46.959 "name": "pt3", 00:44:46.959 "uuid": "00000000-0000-0000-0000-000000000003", 00:44:46.959 "is_configured": true, 00:44:46.959 "data_offset": 2048, 00:44:46.959 "data_size": 63488 00:44:46.959 }, 00:44:46.959 { 00:44:46.959 "name": "pt4", 00:44:46.959 "uuid": "00000000-0000-0000-0000-000000000004", 00:44:46.959 "is_configured": true, 00:44:46.959 "data_offset": 2048, 00:44:46.959 "data_size": 63488 00:44:46.959 } 00:44:46.959 ] 00:44:46.959 }' 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:46.959 05:35:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:47.528 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:44:47.528 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:44:47.528 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:44:47.528 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:44:47.528 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:44:47.528 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:44:47.528 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:44:47.528 05:35:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:47.528 05:35:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:47.528 05:35:34 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:44:47.528 [2024-12-09 05:35:34.233888] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:44:47.528 05:35:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:47.528 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:44:47.528 "name": "raid_bdev1", 00:44:47.528 "aliases": [ 00:44:47.528 "bea2b507-ac6c-4ca2-ab01-4598b55a8b26" 00:44:47.528 ], 00:44:47.528 "product_name": "Raid Volume", 00:44:47.528 "block_size": 512, 00:44:47.528 "num_blocks": 190464, 00:44:47.528 "uuid": "bea2b507-ac6c-4ca2-ab01-4598b55a8b26", 00:44:47.528 "assigned_rate_limits": { 00:44:47.528 "rw_ios_per_sec": 0, 00:44:47.528 "rw_mbytes_per_sec": 0, 00:44:47.528 "r_mbytes_per_sec": 0, 00:44:47.528 "w_mbytes_per_sec": 0 00:44:47.528 }, 00:44:47.528 "claimed": false, 00:44:47.528 "zoned": false, 00:44:47.528 "supported_io_types": { 00:44:47.528 "read": true, 00:44:47.528 "write": true, 00:44:47.528 "unmap": false, 00:44:47.528 "flush": false, 00:44:47.528 "reset": true, 00:44:47.528 "nvme_admin": false, 00:44:47.528 "nvme_io": false, 00:44:47.528 "nvme_io_md": false, 00:44:47.528 "write_zeroes": true, 00:44:47.528 "zcopy": false, 00:44:47.528 "get_zone_info": false, 00:44:47.528 "zone_management": false, 00:44:47.528 "zone_append": false, 00:44:47.528 "compare": false, 00:44:47.528 "compare_and_write": false, 00:44:47.528 "abort": false, 00:44:47.528 "seek_hole": false, 00:44:47.528 "seek_data": false, 00:44:47.528 "copy": false, 00:44:47.528 "nvme_iov_md": false 00:44:47.528 }, 00:44:47.528 "driver_specific": { 00:44:47.528 "raid": { 00:44:47.528 "uuid": "bea2b507-ac6c-4ca2-ab01-4598b55a8b26", 00:44:47.528 "strip_size_kb": 64, 00:44:47.528 "state": "online", 00:44:47.528 "raid_level": "raid5f", 00:44:47.528 "superblock": true, 00:44:47.528 "num_base_bdevs": 4, 00:44:47.528 "num_base_bdevs_discovered": 4, 
00:44:47.528 "num_base_bdevs_operational": 4, 00:44:47.528 "base_bdevs_list": [ 00:44:47.528 { 00:44:47.528 "name": "pt1", 00:44:47.528 "uuid": "00000000-0000-0000-0000-000000000001", 00:44:47.528 "is_configured": true, 00:44:47.528 "data_offset": 2048, 00:44:47.528 "data_size": 63488 00:44:47.528 }, 00:44:47.528 { 00:44:47.528 "name": "pt2", 00:44:47.528 "uuid": "00000000-0000-0000-0000-000000000002", 00:44:47.528 "is_configured": true, 00:44:47.528 "data_offset": 2048, 00:44:47.528 "data_size": 63488 00:44:47.528 }, 00:44:47.528 { 00:44:47.528 "name": "pt3", 00:44:47.528 "uuid": "00000000-0000-0000-0000-000000000003", 00:44:47.528 "is_configured": true, 00:44:47.528 "data_offset": 2048, 00:44:47.528 "data_size": 63488 00:44:47.528 }, 00:44:47.528 { 00:44:47.528 "name": "pt4", 00:44:47.528 "uuid": "00000000-0000-0000-0000-000000000004", 00:44:47.528 "is_configured": true, 00:44:47.528 "data_offset": 2048, 00:44:47.528 "data_size": 63488 00:44:47.528 } 00:44:47.528 ] 00:44:47.528 } 00:44:47.528 } 00:44:47.528 }' 00:44:47.528 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:44:47.528 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:44:47.529 pt2 00:44:47.529 pt3 00:44:47.529 pt4' 00:44:47.529 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:44:47.529 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:44:47.529 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:44:47.529 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:44:47.529 05:35:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:47.529 05:35:34 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:44:47.529 05:35:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:47.529 05:35:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:47.529 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:44:47.529 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:44:47.529 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:44:47.529 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:44:47.529 05:35:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:47.529 05:35:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:47.529 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:44:47.529 05:35:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:47.788 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:44:47.788 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:44:47.788 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:44:47.788 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:44:47.788 05:35:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:47.788 05:35:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:47.788 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:44:47.788 05:35:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:47.788 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:47.789 [2024-12-09 05:35:34.645911] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:47.789 
05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bea2b507-ac6c-4ca2-ab01-4598b55a8b26 '!=' bea2b507-ac6c-4ca2-ab01-4598b55a8b26 ']' 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:47.789 [2024-12-09 05:35:34.701690] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:47.789 05:35:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:48.047 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:48.047 "name": "raid_bdev1", 00:44:48.047 "uuid": "bea2b507-ac6c-4ca2-ab01-4598b55a8b26", 00:44:48.047 "strip_size_kb": 64, 00:44:48.047 "state": "online", 00:44:48.047 "raid_level": "raid5f", 00:44:48.047 "superblock": true, 00:44:48.047 "num_base_bdevs": 4, 00:44:48.047 "num_base_bdevs_discovered": 3, 00:44:48.047 "num_base_bdevs_operational": 3, 00:44:48.047 "base_bdevs_list": [ 00:44:48.047 { 00:44:48.047 "name": null, 00:44:48.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:48.047 "is_configured": false, 00:44:48.047 "data_offset": 0, 00:44:48.047 "data_size": 63488 00:44:48.047 }, 00:44:48.047 { 00:44:48.047 "name": "pt2", 00:44:48.047 "uuid": "00000000-0000-0000-0000-000000000002", 00:44:48.047 "is_configured": true, 00:44:48.047 "data_offset": 2048, 00:44:48.047 "data_size": 63488 00:44:48.047 }, 00:44:48.047 { 00:44:48.047 "name": "pt3", 00:44:48.047 "uuid": "00000000-0000-0000-0000-000000000003", 00:44:48.047 "is_configured": true, 00:44:48.047 "data_offset": 2048, 00:44:48.047 "data_size": 63488 00:44:48.047 }, 00:44:48.047 { 00:44:48.047 "name": "pt4", 00:44:48.047 "uuid": "00000000-0000-0000-0000-000000000004", 00:44:48.047 "is_configured": true, 00:44:48.047 
"data_offset": 2048, 00:44:48.047 "data_size": 63488 00:44:48.047 } 00:44:48.047 ] 00:44:48.047 }' 00:44:48.047 05:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:48.047 05:35:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:48.308 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:44:48.308 05:35:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:48.308 05:35:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:48.308 [2024-12-09 05:35:35.257893] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:44:48.308 [2024-12-09 05:35:35.257936] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:44:48.308 [2024-12-09 05:35:35.258055] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:44:48.308 [2024-12-09 05:35:35.258212] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:44:48.308 [2024-12-09 05:35:35.258228] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:44:48.308 05:35:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:48.308 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:44:48.308 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:48.308 05:35:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:48.308 05:35:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:48.308 05:35:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:48.567 [2024-12-09 05:35:35.345822] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:44:48.567 [2024-12-09 05:35:35.345881] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:48.567 [2024-12-09 05:35:35.345909] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:44:48.567 [2024-12-09 05:35:35.345923] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:48.567 [2024-12-09 05:35:35.348970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:48.567 [2024-12-09 05:35:35.349011] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:44:48.567 [2024-12-09 05:35:35.349115] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:44:48.567 [2024-12-09 05:35:35.349171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:44:48.567 pt2 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:48.567 "name": "raid_bdev1", 00:44:48.567 "uuid": "bea2b507-ac6c-4ca2-ab01-4598b55a8b26", 00:44:48.567 "strip_size_kb": 64, 00:44:48.567 "state": "configuring", 00:44:48.567 "raid_level": "raid5f", 00:44:48.567 "superblock": true, 00:44:48.567 
"num_base_bdevs": 4, 00:44:48.567 "num_base_bdevs_discovered": 1, 00:44:48.567 "num_base_bdevs_operational": 3, 00:44:48.567 "base_bdevs_list": [ 00:44:48.567 { 00:44:48.567 "name": null, 00:44:48.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:48.567 "is_configured": false, 00:44:48.567 "data_offset": 2048, 00:44:48.567 "data_size": 63488 00:44:48.567 }, 00:44:48.567 { 00:44:48.567 "name": "pt2", 00:44:48.567 "uuid": "00000000-0000-0000-0000-000000000002", 00:44:48.567 "is_configured": true, 00:44:48.567 "data_offset": 2048, 00:44:48.567 "data_size": 63488 00:44:48.567 }, 00:44:48.567 { 00:44:48.567 "name": null, 00:44:48.567 "uuid": "00000000-0000-0000-0000-000000000003", 00:44:48.567 "is_configured": false, 00:44:48.567 "data_offset": 2048, 00:44:48.567 "data_size": 63488 00:44:48.567 }, 00:44:48.567 { 00:44:48.567 "name": null, 00:44:48.567 "uuid": "00000000-0000-0000-0000-000000000004", 00:44:48.567 "is_configured": false, 00:44:48.567 "data_offset": 2048, 00:44:48.567 "data_size": 63488 00:44:48.567 } 00:44:48.567 ] 00:44:48.567 }' 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:48.567 05:35:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:49.133 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:44:49.133 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:44:49.133 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:44:49.133 05:35:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:49.133 05:35:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:49.133 [2024-12-09 05:35:35.902065] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:44:49.133 [2024-12-09 
05:35:35.902416] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:49.133 [2024-12-09 05:35:35.902464] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:44:49.133 [2024-12-09 05:35:35.902481] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:49.133 [2024-12-09 05:35:35.903144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:49.133 [2024-12-09 05:35:35.903281] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:44:49.133 [2024-12-09 05:35:35.903427] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:44:49.133 [2024-12-09 05:35:35.903463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:44:49.133 pt3 00:44:49.133 05:35:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:49.133 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:44:49.133 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:44:49.133 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:44:49.133 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:49.133 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:49.133 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:44:49.133 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:49.133 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:49.133 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:44:49.133 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:49.133 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:49.133 05:35:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:49.133 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:49.133 05:35:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:49.133 05:35:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:49.133 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:49.133 "name": "raid_bdev1", 00:44:49.133 "uuid": "bea2b507-ac6c-4ca2-ab01-4598b55a8b26", 00:44:49.133 "strip_size_kb": 64, 00:44:49.133 "state": "configuring", 00:44:49.133 "raid_level": "raid5f", 00:44:49.133 "superblock": true, 00:44:49.133 "num_base_bdevs": 4, 00:44:49.133 "num_base_bdevs_discovered": 2, 00:44:49.133 "num_base_bdevs_operational": 3, 00:44:49.133 "base_bdevs_list": [ 00:44:49.133 { 00:44:49.133 "name": null, 00:44:49.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:49.133 "is_configured": false, 00:44:49.133 "data_offset": 2048, 00:44:49.133 "data_size": 63488 00:44:49.133 }, 00:44:49.133 { 00:44:49.133 "name": "pt2", 00:44:49.133 "uuid": "00000000-0000-0000-0000-000000000002", 00:44:49.133 "is_configured": true, 00:44:49.133 "data_offset": 2048, 00:44:49.133 "data_size": 63488 00:44:49.133 }, 00:44:49.133 { 00:44:49.133 "name": "pt3", 00:44:49.133 "uuid": "00000000-0000-0000-0000-000000000003", 00:44:49.133 "is_configured": true, 00:44:49.133 "data_offset": 2048, 00:44:49.133 "data_size": 63488 00:44:49.133 }, 00:44:49.133 { 00:44:49.133 "name": null, 00:44:49.133 "uuid": "00000000-0000-0000-0000-000000000004", 00:44:49.133 "is_configured": false, 00:44:49.133 "data_offset": 2048, 
00:44:49.133 "data_size": 63488 00:44:49.133 } 00:44:49.133 ] 00:44:49.133 }' 00:44:49.133 05:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:49.133 05:35:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:49.699 05:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:44:49.699 05:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:44:49.699 05:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:44:49.699 05:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:44:49.699 05:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:49.699 05:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:49.699 [2024-12-09 05:35:36.458265] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:44:49.699 [2024-12-09 05:35:36.458558] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:49.699 [2024-12-09 05:35:36.458742] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:44:49.699 [2024-12-09 05:35:36.458783] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:49.699 [2024-12-09 05:35:36.459434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:49.699 [2024-12-09 05:35:36.459458] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:44:49.699 [2024-12-09 05:35:36.459568] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:44:49.699 [2024-12-09 05:35:36.459606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:44:49.699 [2024-12-09 05:35:36.459766] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:44:49.699 [2024-12-09 05:35:36.459797] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:44:49.699 [2024-12-09 05:35:36.460339] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:44:49.699 [2024-12-09 05:35:36.466808] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:44:49.699 [2024-12-09 05:35:36.467010] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:44:49.699 [2024-12-09 05:35:36.467571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:44:49.699 pt4 00:44:49.699 05:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:49.699 05:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:44:49.699 05:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:44:49.699 05:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:44:49.699 05:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:49.699 05:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:49.699 05:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:44:49.699 05:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:49.699 05:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:49.699 05:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:49.699 05:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:49.699 
05:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:49.699 05:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:49.699 05:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:49.699 05:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:49.699 05:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:49.699 05:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:49.699 "name": "raid_bdev1", 00:44:49.699 "uuid": "bea2b507-ac6c-4ca2-ab01-4598b55a8b26", 00:44:49.699 "strip_size_kb": 64, 00:44:49.699 "state": "online", 00:44:49.699 "raid_level": "raid5f", 00:44:49.699 "superblock": true, 00:44:49.699 "num_base_bdevs": 4, 00:44:49.699 "num_base_bdevs_discovered": 3, 00:44:49.699 "num_base_bdevs_operational": 3, 00:44:49.699 "base_bdevs_list": [ 00:44:49.699 { 00:44:49.699 "name": null, 00:44:49.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:49.699 "is_configured": false, 00:44:49.699 "data_offset": 2048, 00:44:49.699 "data_size": 63488 00:44:49.699 }, 00:44:49.699 { 00:44:49.699 "name": "pt2", 00:44:49.699 "uuid": "00000000-0000-0000-0000-000000000002", 00:44:49.699 "is_configured": true, 00:44:49.699 "data_offset": 2048, 00:44:49.699 "data_size": 63488 00:44:49.699 }, 00:44:49.699 { 00:44:49.699 "name": "pt3", 00:44:49.699 "uuid": "00000000-0000-0000-0000-000000000003", 00:44:49.700 "is_configured": true, 00:44:49.700 "data_offset": 2048, 00:44:49.700 "data_size": 63488 00:44:49.700 }, 00:44:49.700 { 00:44:49.700 "name": "pt4", 00:44:49.700 "uuid": "00000000-0000-0000-0000-000000000004", 00:44:49.700 "is_configured": true, 00:44:49.700 "data_offset": 2048, 00:44:49.700 "data_size": 63488 00:44:49.700 } 00:44:49.700 ] 00:44:49.700 }' 00:44:49.700 05:35:36 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:49.700 05:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:50.266 05:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:44:50.266 05:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:50.266 05:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:50.266 [2024-12-09 05:35:36.967452] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:44:50.266 [2024-12-09 05:35:36.967654] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:44:50.266 [2024-12-09 05:35:36.967838] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:44:50.266 [2024-12-09 05:35:36.967982] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:44:50.266 [2024-12-09 05:35:36.968011] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:44:50.266 05:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:50.266 05:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:50.266 05:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:50.266 05:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:50.266 05:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:44:50.266 05:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:50.266 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:44:50.266 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:44:50.266 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:44:50.266 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:44:50.266 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:44:50.266 05:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:50.266 05:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:50.266 05:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:50.266 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:44:50.266 05:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:50.266 05:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:50.266 [2024-12-09 05:35:37.039434] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:44:50.266 [2024-12-09 05:35:37.039638] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:50.266 [2024-12-09 05:35:37.039847] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:44:50.266 [2024-12-09 05:35:37.039881] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:50.266 [2024-12-09 05:35:37.043453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:50.266 [2024-12-09 05:35:37.043530] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:44:50.266 [2024-12-09 05:35:37.043642] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:44:50.266 [2024-12-09 05:35:37.043701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:44:50.266 
[2024-12-09 05:35:37.043995] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:44:50.266 [2024-12-09 05:35:37.044089] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:44:50.266 pt1 00:44:50.266 [2024-12-09 05:35:37.044318] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:44:50.266 05:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:50.266 [2024-12-09 05:35:37.044521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:44:50.266 [2024-12-09 05:35:37.044691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:44:50.266 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:44:50.266 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:44:50.266 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:44:50.266 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:44:50.267 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:50.267 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:50.267 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:44:50.267 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:50.267 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:50.267 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:50.267 05:35:37 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:44:50.267 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:50.267 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:50.267 05:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:50.267 05:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:50.267 05:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:50.267 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:50.267 "name": "raid_bdev1", 00:44:50.267 "uuid": "bea2b507-ac6c-4ca2-ab01-4598b55a8b26", 00:44:50.267 "strip_size_kb": 64, 00:44:50.267 "state": "configuring", 00:44:50.267 "raid_level": "raid5f", 00:44:50.267 "superblock": true, 00:44:50.267 "num_base_bdevs": 4, 00:44:50.267 "num_base_bdevs_discovered": 2, 00:44:50.267 "num_base_bdevs_operational": 3, 00:44:50.267 "base_bdevs_list": [ 00:44:50.267 { 00:44:50.267 "name": null, 00:44:50.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:50.267 "is_configured": false, 00:44:50.267 "data_offset": 2048, 00:44:50.267 "data_size": 63488 00:44:50.267 }, 00:44:50.267 { 00:44:50.267 "name": "pt2", 00:44:50.267 "uuid": "00000000-0000-0000-0000-000000000002", 00:44:50.267 "is_configured": true, 00:44:50.267 "data_offset": 2048, 00:44:50.267 "data_size": 63488 00:44:50.267 }, 00:44:50.267 { 00:44:50.267 "name": "pt3", 00:44:50.267 "uuid": "00000000-0000-0000-0000-000000000003", 00:44:50.267 "is_configured": true, 00:44:50.267 "data_offset": 2048, 00:44:50.267 "data_size": 63488 00:44:50.267 }, 00:44:50.267 { 00:44:50.267 "name": null, 00:44:50.267 "uuid": "00000000-0000-0000-0000-000000000004", 00:44:50.267 "is_configured": false, 00:44:50.267 "data_offset": 2048, 00:44:50.267 "data_size": 63488 00:44:50.267 } 00:44:50.267 ] 
00:44:50.267 }' 00:44:50.267 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:50.267 05:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:50.832 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:44:50.832 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:44:50.832 05:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:50.832 05:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:50.832 05:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:50.832 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:44:50.832 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:44:50.832 05:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:50.832 05:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:50.832 [2024-12-09 05:35:37.587945] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:44:50.832 [2024-12-09 05:35:37.588160] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:50.832 [2024-12-09 05:35:37.588348] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:44:50.832 [2024-12-09 05:35:37.588468] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:50.832 [2024-12-09 05:35:37.589238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:50.832 [2024-12-09 05:35:37.589270] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:44:50.832 [2024-12-09 05:35:37.589409] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:44:50.832 [2024-12-09 05:35:37.589448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:44:50.832 [2024-12-09 05:35:37.589625] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:44:50.832 [2024-12-09 05:35:37.589648] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:44:50.832 [2024-12-09 05:35:37.590038] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:44:50.832 pt4 00:44:50.832 [2024-12-09 05:35:37.597613] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:44:50.832 [2024-12-09 05:35:37.597642] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:44:50.832 [2024-12-09 05:35:37.598034] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:44:50.832 05:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:50.832 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:44:50.832 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:44:50.832 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:44:50.832 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:50.832 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:50.832 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:44:50.832 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:50.832 05:35:37 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:50.832 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:50.832 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:50.832 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:50.832 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:50.832 05:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:50.832 05:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:50.832 05:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:50.832 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:50.832 "name": "raid_bdev1", 00:44:50.832 "uuid": "bea2b507-ac6c-4ca2-ab01-4598b55a8b26", 00:44:50.832 "strip_size_kb": 64, 00:44:50.832 "state": "online", 00:44:50.832 "raid_level": "raid5f", 00:44:50.832 "superblock": true, 00:44:50.832 "num_base_bdevs": 4, 00:44:50.832 "num_base_bdevs_discovered": 3, 00:44:50.832 "num_base_bdevs_operational": 3, 00:44:50.832 "base_bdevs_list": [ 00:44:50.832 { 00:44:50.832 "name": null, 00:44:50.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:50.832 "is_configured": false, 00:44:50.832 "data_offset": 2048, 00:44:50.832 "data_size": 63488 00:44:50.832 }, 00:44:50.832 { 00:44:50.832 "name": "pt2", 00:44:50.832 "uuid": "00000000-0000-0000-0000-000000000002", 00:44:50.833 "is_configured": true, 00:44:50.833 "data_offset": 2048, 00:44:50.833 "data_size": 63488 00:44:50.833 }, 00:44:50.833 { 00:44:50.833 "name": "pt3", 00:44:50.833 "uuid": "00000000-0000-0000-0000-000000000003", 00:44:50.833 "is_configured": true, 00:44:50.833 "data_offset": 2048, 00:44:50.833 "data_size": 63488 
00:44:50.833 }, 00:44:50.833 { 00:44:50.833 "name": "pt4", 00:44:50.833 "uuid": "00000000-0000-0000-0000-000000000004", 00:44:50.833 "is_configured": true, 00:44:50.833 "data_offset": 2048, 00:44:50.833 "data_size": 63488 00:44:50.833 } 00:44:50.833 ] 00:44:50.833 }' 00:44:50.833 05:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:50.833 05:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:51.397 05:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:44:51.397 05:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:44:51.397 05:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:51.397 05:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:51.397 05:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:51.397 05:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:44:51.397 05:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:44:51.397 05:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:51.397 05:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:44:51.397 05:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:51.397 [2024-12-09 05:35:38.230657] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:44:51.397 05:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:51.397 05:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' bea2b507-ac6c-4ca2-ab01-4598b55a8b26 '!=' bea2b507-ac6c-4ca2-ab01-4598b55a8b26 ']' 00:44:51.397 05:35:38 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84640 00:44:51.397 05:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84640 ']' 00:44:51.397 05:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84640 00:44:51.397 05:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:44:51.397 05:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:51.397 05:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84640 00:44:51.397 killing process with pid 84640 00:44:51.397 05:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:51.397 05:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:51.397 05:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84640' 00:44:51.397 05:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84640 00:44:51.397 [2024-12-09 05:35:38.304652] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:44:51.397 [2024-12-09 05:35:38.304801] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:44:51.397 05:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84640 00:44:51.397 [2024-12-09 05:35:38.304921] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:44:51.397 [2024-12-09 05:35:38.304956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:44:51.961 [2024-12-09 05:35:38.698120] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:44:53.332 05:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:44:53.332 
00:44:53.332 real 0m9.720s 00:44:53.332 user 0m15.822s 00:44:53.332 sys 0m1.382s 00:44:53.332 05:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:53.332 ************************************ 00:44:53.332 END TEST raid5f_superblock_test 00:44:53.332 ************************************ 00:44:53.332 05:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:44:53.332 05:35:39 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:44:53.332 05:35:39 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:44:53.332 05:35:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:44:53.332 05:35:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:53.332 05:35:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:44:53.332 ************************************ 00:44:53.332 START TEST raid5f_rebuild_test 00:44:53.332 ************************************ 00:44:53.332 05:35:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:44:53.332 05:35:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:44:53.332 05:35:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:44:53.332 05:35:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:44:53.332 05:35:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:44:53.332 05:35:40 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85131 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85131 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 85131 ']' 00:44:53.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:53.332 05:35:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:44:53.332 I/O size of 3145728 is greater than zero copy threshold (65536). 00:44:53.332 Zero copy mechanism will not be used. 00:44:53.332 [2024-12-09 05:35:40.125040] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:44:53.332 [2024-12-09 05:35:40.125269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85131 ] 00:44:53.590 [2024-12-09 05:35:40.318084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:53.590 [2024-12-09 05:35:40.455698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:53.847 [2024-12-09 05:35:40.681512] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:44:53.847 [2024-12-09 05:35:40.681556] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:44:54.417 BaseBdev1_malloc 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:44:54.417 [2024-12-09 05:35:41.154371] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:44:54.417 [2024-12-09 05:35:41.154615] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:54.417 [2024-12-09 05:35:41.154696] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:44:54.417 [2024-12-09 05:35:41.154876] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:54.417 [2024-12-09 05:35:41.158012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:54.417 [2024-12-09 05:35:41.158092] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:44:54.417 BaseBdev1 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:44:54.417 BaseBdev2_malloc 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:44:54.417 [2024-12-09 05:35:41.210303] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:44:54.417 [2024-12-09 05:35:41.210570] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:54.417 [2024-12-09 05:35:41.210615] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:44:54.417 [2024-12-09 05:35:41.210635] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:54.417 BaseBdev2 00:44:54.417 [2024-12-09 05:35:41.214134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:54.417 [2024-12-09 05:35:41.214189] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:44:54.417 BaseBdev3_malloc 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:44:54.417 [2024-12-09 05:35:41.281333] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:44:54.417 [2024-12-09 05:35:41.281632] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:54.417 [2024-12-09 05:35:41.281710] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:44:54.417 [2024-12-09 05:35:41.281978] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:54.417 
BaseBdev3 00:44:54.417 [2024-12-09 05:35:41.285234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:54.417 [2024-12-09 05:35:41.285313] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:44:54.417 BaseBdev4_malloc 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:44:54.417 [2024-12-09 05:35:41.337462] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:44:54.417 [2024-12-09 05:35:41.337746] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:54.417 [2024-12-09 05:35:41.337842] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:44:54.417 [2024-12-09 05:35:41.338069] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:54.417 BaseBdev4 00:44:54.417 [2024-12-09 05:35:41.341332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:54.417 [2024-12-09 05:35:41.341415] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:44:54.417 spare_malloc 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:54.417 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:44:54.676 spare_delay 00:44:54.676 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:54.676 05:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:44:54.676 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:54.676 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:44:54.676 [2024-12-09 05:35:41.401907] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:44:54.676 [2024-12-09 05:35:41.401974] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:54.676 [2024-12-09 05:35:41.402005] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:44:54.676 [2024-12-09 05:35:41.402038] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:54.676 [2024-12-09 05:35:41.405322] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:54.676 [2024-12-09 05:35:41.405389] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:44:54.676 spare 00:44:54.676 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:54.676 05:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:44:54.676 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:54.676 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:44:54.676 [2024-12-09 05:35:41.410149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:44:54.676 [2024-12-09 05:35:41.413213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:44:54.676 [2024-12-09 05:35:41.413551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:44:54.676 [2024-12-09 05:35:41.413681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:44:54.676 [2024-12-09 05:35:41.413832] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:44:54.676 [2024-12-09 05:35:41.413878] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:44:54.676 [2024-12-09 05:35:41.414319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:44:54.676 [2024-12-09 05:35:41.421976] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:44:54.676 [2024-12-09 05:35:41.422015] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:44:54.676 [2024-12-09 05:35:41.422303] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:44:54.676 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:54.676 05:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:44:54.676 05:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:44:54.676 05:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:44:54.676 05:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:54.676 05:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:54.676 05:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:44:54.676 05:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:54.676 05:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:54.676 05:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:54.676 05:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:54.676 05:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:54.676 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:54.676 05:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:54.676 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:44:54.676 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:54.676 05:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:54.676 "name": "raid_bdev1", 00:44:54.676 "uuid": "bfd35a4e-5a6f-40e5-9632-3e8480cf2a02", 00:44:54.676 "strip_size_kb": 64, 00:44:54.676 "state": 
"online", 00:44:54.676 "raid_level": "raid5f", 00:44:54.676 "superblock": false, 00:44:54.676 "num_base_bdevs": 4, 00:44:54.676 "num_base_bdevs_discovered": 4, 00:44:54.676 "num_base_bdevs_operational": 4, 00:44:54.676 "base_bdevs_list": [ 00:44:54.676 { 00:44:54.676 "name": "BaseBdev1", 00:44:54.676 "uuid": "eeb6147c-a3d2-52a1-bf9e-d9edc8899568", 00:44:54.676 "is_configured": true, 00:44:54.676 "data_offset": 0, 00:44:54.676 "data_size": 65536 00:44:54.676 }, 00:44:54.676 { 00:44:54.676 "name": "BaseBdev2", 00:44:54.676 "uuid": "5eb06385-426d-5529-aac1-a9350975e7b6", 00:44:54.676 "is_configured": true, 00:44:54.676 "data_offset": 0, 00:44:54.676 "data_size": 65536 00:44:54.676 }, 00:44:54.676 { 00:44:54.676 "name": "BaseBdev3", 00:44:54.676 "uuid": "f03370be-e186-50bf-9f36-f29f9b2b7de2", 00:44:54.676 "is_configured": true, 00:44:54.676 "data_offset": 0, 00:44:54.676 "data_size": 65536 00:44:54.676 }, 00:44:54.676 { 00:44:54.676 "name": "BaseBdev4", 00:44:54.676 "uuid": "7dce3c36-fb52-526a-8908-7eaa724e1da3", 00:44:54.676 "is_configured": true, 00:44:54.676 "data_offset": 0, 00:44:54.676 "data_size": 65536 00:44:54.676 } 00:44:54.676 ] 00:44:54.676 }' 00:44:54.676 05:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:54.676 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:44:55.243 05:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:44:55.243 05:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:44:55.243 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:55.243 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:44:55.243 [2024-12-09 05:35:41.971148] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:44:55.243 05:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:44:55.243 05:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:44:55.243 05:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:55.243 05:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:44:55.243 05:35:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:55.243 05:35:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:44:55.243 05:35:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:55.243 05:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:44:55.243 05:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:44:55.243 05:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:44:55.243 05:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:44:55.243 05:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:44:55.243 05:35:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:44:55.243 05:35:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:44:55.243 05:35:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:44:55.243 05:35:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:44:55.243 05:35:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:44:55.243 05:35:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:44:55.243 05:35:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:44:55.243 05:35:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 
1 )) 00:44:55.243 05:35:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:44:55.502 [2024-12-09 05:35:42.366963] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:44:55.502 /dev/nbd0 00:44:55.502 05:35:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:44:55.502 05:35:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:44:55.502 05:35:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:44:55.502 05:35:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:44:55.502 05:35:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:44:55.502 05:35:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:44:55.502 05:35:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:44:55.502 05:35:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:44:55.502 05:35:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:44:55.502 05:35:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:44:55.502 05:35:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:44:55.502 1+0 records in 00:44:55.502 1+0 records out 00:44:55.502 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373438 s, 11.0 MB/s 00:44:55.502 05:35:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:55.502 05:35:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:44:55.502 05:35:42 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:55.502 05:35:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:44:55.502 05:35:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:44:55.502 05:35:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:44:55.502 05:35:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:44:55.502 05:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:44:55.502 05:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:44:55.502 05:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:44:55.502 05:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:44:56.437 512+0 records in 00:44:56.437 512+0 records out 00:44:56.438 100663296 bytes (101 MB, 96 MiB) copied, 0.604407 s, 167 MB/s 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:44:56.438 
[2024-12-09 05:35:43.333686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:44:56.438 [2024-12-09 05:35:43.346223] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:56.438 05:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.696 05:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:56.696 "name": "raid_bdev1", 00:44:56.696 "uuid": "bfd35a4e-5a6f-40e5-9632-3e8480cf2a02", 00:44:56.696 "strip_size_kb": 64, 00:44:56.696 "state": "online", 00:44:56.696 "raid_level": "raid5f", 00:44:56.696 "superblock": false, 00:44:56.696 "num_base_bdevs": 4, 00:44:56.696 "num_base_bdevs_discovered": 3, 00:44:56.696 "num_base_bdevs_operational": 3, 00:44:56.696 "base_bdevs_list": [ 00:44:56.696 { 00:44:56.696 "name": null, 00:44:56.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:56.696 "is_configured": false, 00:44:56.696 "data_offset": 0, 00:44:56.696 "data_size": 65536 00:44:56.696 }, 00:44:56.696 { 00:44:56.696 "name": "BaseBdev2", 00:44:56.696 "uuid": "5eb06385-426d-5529-aac1-a9350975e7b6", 00:44:56.696 "is_configured": true, 00:44:56.696 "data_offset": 0, 00:44:56.696 "data_size": 65536 00:44:56.696 }, 00:44:56.696 { 00:44:56.696 "name": "BaseBdev3", 00:44:56.696 "uuid": 
"f03370be-e186-50bf-9f36-f29f9b2b7de2", 00:44:56.696 "is_configured": true, 00:44:56.696 "data_offset": 0, 00:44:56.696 "data_size": 65536 00:44:56.696 }, 00:44:56.696 { 00:44:56.696 "name": "BaseBdev4", 00:44:56.696 "uuid": "7dce3c36-fb52-526a-8908-7eaa724e1da3", 00:44:56.696 "is_configured": true, 00:44:56.696 "data_offset": 0, 00:44:56.696 "data_size": 65536 00:44:56.696 } 00:44:56.696 ] 00:44:56.696 }' 00:44:56.696 05:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:56.696 05:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:44:56.955 05:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:44:56.955 05:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.955 05:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:44:56.955 [2024-12-09 05:35:43.862449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:44:56.955 [2024-12-09 05:35:43.877082] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:44:56.955 05:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.955 05:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:44:56.955 [2024-12-09 05:35:43.886418] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:44:58.330 05:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:44:58.330 05:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:44:58.330 05:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:44:58.330 05:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:44:58.330 05:35:44 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:44:58.330 05:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:58.330 05:35:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:58.330 05:35:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:44:58.330 05:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:58.330 05:35:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:58.330 05:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:44:58.330 "name": "raid_bdev1", 00:44:58.330 "uuid": "bfd35a4e-5a6f-40e5-9632-3e8480cf2a02", 00:44:58.330 "strip_size_kb": 64, 00:44:58.330 "state": "online", 00:44:58.330 "raid_level": "raid5f", 00:44:58.330 "superblock": false, 00:44:58.330 "num_base_bdevs": 4, 00:44:58.330 "num_base_bdevs_discovered": 4, 00:44:58.330 "num_base_bdevs_operational": 4, 00:44:58.330 "process": { 00:44:58.330 "type": "rebuild", 00:44:58.330 "target": "spare", 00:44:58.330 "progress": { 00:44:58.330 "blocks": 17280, 00:44:58.330 "percent": 8 00:44:58.330 } 00:44:58.330 }, 00:44:58.330 "base_bdevs_list": [ 00:44:58.330 { 00:44:58.330 "name": "spare", 00:44:58.330 "uuid": "24c0990a-b94c-59d9-a413-1230e5fa9f7f", 00:44:58.330 "is_configured": true, 00:44:58.330 "data_offset": 0, 00:44:58.330 "data_size": 65536 00:44:58.330 }, 00:44:58.330 { 00:44:58.330 "name": "BaseBdev2", 00:44:58.330 "uuid": "5eb06385-426d-5529-aac1-a9350975e7b6", 00:44:58.330 "is_configured": true, 00:44:58.330 "data_offset": 0, 00:44:58.330 "data_size": 65536 00:44:58.330 }, 00:44:58.330 { 00:44:58.330 "name": "BaseBdev3", 00:44:58.330 "uuid": "f03370be-e186-50bf-9f36-f29f9b2b7de2", 00:44:58.330 "is_configured": true, 00:44:58.330 "data_offset": 0, 00:44:58.330 "data_size": 65536 00:44:58.330 }, 
00:44:58.330 { 00:44:58.330 "name": "BaseBdev4", 00:44:58.330 "uuid": "7dce3c36-fb52-526a-8908-7eaa724e1da3", 00:44:58.330 "is_configured": true, 00:44:58.330 "data_offset": 0, 00:44:58.330 "data_size": 65536 00:44:58.330 } 00:44:58.330 ] 00:44:58.330 }' 00:44:58.330 05:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:44:58.330 05:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:44:58.330 05:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:44:58.330 05:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:44:58.330 05:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:44:58.330 05:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:58.330 05:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:44:58.330 [2024-12-09 05:35:45.055969] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:44:58.330 [2024-12-09 05:35:45.099425] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:44:58.330 [2024-12-09 05:35:45.099709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:44:58.330 [2024-12-09 05:35:45.099742] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:44:58.330 [2024-12-09 05:35:45.099764] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:44:58.330 05:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:58.330 05:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:44:58.330 05:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:44:58.330 05:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:44:58.330 05:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:44:58.330 05:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:44:58.330 05:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:44:58.331 05:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:58.331 05:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:58.331 05:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:58.331 05:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:58.331 05:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:58.331 05:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:58.331 05:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:58.331 05:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:44:58.331 05:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:58.331 05:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:58.331 "name": "raid_bdev1", 00:44:58.331 "uuid": "bfd35a4e-5a6f-40e5-9632-3e8480cf2a02", 00:44:58.331 "strip_size_kb": 64, 00:44:58.331 "state": "online", 00:44:58.331 "raid_level": "raid5f", 00:44:58.331 "superblock": false, 00:44:58.331 "num_base_bdevs": 4, 00:44:58.331 "num_base_bdevs_discovered": 3, 00:44:58.331 "num_base_bdevs_operational": 3, 00:44:58.331 "base_bdevs_list": [ 00:44:58.331 { 00:44:58.331 "name": null, 00:44:58.331 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:44:58.331 "is_configured": false, 00:44:58.331 "data_offset": 0, 00:44:58.331 "data_size": 65536 00:44:58.331 }, 00:44:58.331 { 00:44:58.331 "name": "BaseBdev2", 00:44:58.331 "uuid": "5eb06385-426d-5529-aac1-a9350975e7b6", 00:44:58.331 "is_configured": true, 00:44:58.331 "data_offset": 0, 00:44:58.331 "data_size": 65536 00:44:58.331 }, 00:44:58.331 { 00:44:58.331 "name": "BaseBdev3", 00:44:58.331 "uuid": "f03370be-e186-50bf-9f36-f29f9b2b7de2", 00:44:58.331 "is_configured": true, 00:44:58.331 "data_offset": 0, 00:44:58.331 "data_size": 65536 00:44:58.331 }, 00:44:58.331 { 00:44:58.331 "name": "BaseBdev4", 00:44:58.331 "uuid": "7dce3c36-fb52-526a-8908-7eaa724e1da3", 00:44:58.331 "is_configured": true, 00:44:58.331 "data_offset": 0, 00:44:58.331 "data_size": 65536 00:44:58.331 } 00:44:58.331 ] 00:44:58.331 }' 00:44:58.331 05:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:58.331 05:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:44:58.896 05:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:44:58.896 05:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:44:58.896 05:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:44:58.896 05:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:44:58.896 05:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:44:58.896 05:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:58.896 05:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:58.896 05:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:58.896 05:35:45 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:44:58.896 05:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:58.896 05:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:44:58.896 "name": "raid_bdev1", 00:44:58.896 "uuid": "bfd35a4e-5a6f-40e5-9632-3e8480cf2a02", 00:44:58.896 "strip_size_kb": 64, 00:44:58.896 "state": "online", 00:44:58.896 "raid_level": "raid5f", 00:44:58.896 "superblock": false, 00:44:58.896 "num_base_bdevs": 4, 00:44:58.896 "num_base_bdevs_discovered": 3, 00:44:58.896 "num_base_bdevs_operational": 3, 00:44:58.896 "base_bdevs_list": [ 00:44:58.896 { 00:44:58.896 "name": null, 00:44:58.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:58.896 "is_configured": false, 00:44:58.896 "data_offset": 0, 00:44:58.896 "data_size": 65536 00:44:58.896 }, 00:44:58.896 { 00:44:58.896 "name": "BaseBdev2", 00:44:58.896 "uuid": "5eb06385-426d-5529-aac1-a9350975e7b6", 00:44:58.896 "is_configured": true, 00:44:58.896 "data_offset": 0, 00:44:58.896 "data_size": 65536 00:44:58.896 }, 00:44:58.896 { 00:44:58.896 "name": "BaseBdev3", 00:44:58.896 "uuid": "f03370be-e186-50bf-9f36-f29f9b2b7de2", 00:44:58.896 "is_configured": true, 00:44:58.896 "data_offset": 0, 00:44:58.896 "data_size": 65536 00:44:58.896 }, 00:44:58.896 { 00:44:58.896 "name": "BaseBdev4", 00:44:58.896 "uuid": "7dce3c36-fb52-526a-8908-7eaa724e1da3", 00:44:58.896 "is_configured": true, 00:44:58.896 "data_offset": 0, 00:44:58.896 "data_size": 65536 00:44:58.896 } 00:44:58.896 ] 00:44:58.896 }' 00:44:58.896 05:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:44:58.896 05:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:44:58.896 05:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:44:58.896 05:35:45 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:44:58.896 05:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:44:58.896 05:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:58.896 05:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:44:58.897 [2024-12-09 05:35:45.802035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:44:58.897 [2024-12-09 05:35:45.816164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:44:58.897 05:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:58.897 05:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:44:58.897 [2024-12-09 05:35:45.825497] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:45:00.271 05:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:00.271 05:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:45:00.272 05:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:45:00.272 05:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:45:00.272 05:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:45:00.272 05:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:00.272 05:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:00.272 05:35:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:00.272 05:35:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:45:00.272 05:35:46 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:00.272 05:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:45:00.272 "name": "raid_bdev1", 00:45:00.272 "uuid": "bfd35a4e-5a6f-40e5-9632-3e8480cf2a02", 00:45:00.272 "strip_size_kb": 64, 00:45:00.272 "state": "online", 00:45:00.272 "raid_level": "raid5f", 00:45:00.272 "superblock": false, 00:45:00.272 "num_base_bdevs": 4, 00:45:00.272 "num_base_bdevs_discovered": 4, 00:45:00.272 "num_base_bdevs_operational": 4, 00:45:00.272 "process": { 00:45:00.272 "type": "rebuild", 00:45:00.272 "target": "spare", 00:45:00.272 "progress": { 00:45:00.272 "blocks": 17280, 00:45:00.272 "percent": 8 00:45:00.272 } 00:45:00.272 }, 00:45:00.272 "base_bdevs_list": [ 00:45:00.272 { 00:45:00.272 "name": "spare", 00:45:00.272 "uuid": "24c0990a-b94c-59d9-a413-1230e5fa9f7f", 00:45:00.272 "is_configured": true, 00:45:00.272 "data_offset": 0, 00:45:00.272 "data_size": 65536 00:45:00.272 }, 00:45:00.272 { 00:45:00.272 "name": "BaseBdev2", 00:45:00.272 "uuid": "5eb06385-426d-5529-aac1-a9350975e7b6", 00:45:00.272 "is_configured": true, 00:45:00.272 "data_offset": 0, 00:45:00.272 "data_size": 65536 00:45:00.272 }, 00:45:00.272 { 00:45:00.272 "name": "BaseBdev3", 00:45:00.272 "uuid": "f03370be-e186-50bf-9f36-f29f9b2b7de2", 00:45:00.272 "is_configured": true, 00:45:00.272 "data_offset": 0, 00:45:00.272 "data_size": 65536 00:45:00.272 }, 00:45:00.272 { 00:45:00.272 "name": "BaseBdev4", 00:45:00.272 "uuid": "7dce3c36-fb52-526a-8908-7eaa724e1da3", 00:45:00.272 "is_configured": true, 00:45:00.272 "data_offset": 0, 00:45:00.272 "data_size": 65536 00:45:00.272 } 00:45:00.272 ] 00:45:00.272 }' 00:45:00.272 05:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:45:00.272 05:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:00.272 05:35:46 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:45:00.272 05:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:45:00.272 05:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:45:00.272 05:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:45:00.272 05:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:45:00.272 05:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=682 00:45:00.272 05:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:45:00.272 05:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:00.272 05:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:45:00.272 05:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:45:00.272 05:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:45:00.272 05:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:45:00.272 05:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:00.272 05:35:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:00.272 05:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:00.272 05:35:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:45:00.272 05:35:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:00.272 05:35:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:45:00.272 "name": "raid_bdev1", 00:45:00.272 "uuid": "bfd35a4e-5a6f-40e5-9632-3e8480cf2a02", 
00:45:00.272 "strip_size_kb": 64, 00:45:00.272 "state": "online", 00:45:00.272 "raid_level": "raid5f", 00:45:00.272 "superblock": false, 00:45:00.272 "num_base_bdevs": 4, 00:45:00.272 "num_base_bdevs_discovered": 4, 00:45:00.272 "num_base_bdevs_operational": 4, 00:45:00.272 "process": { 00:45:00.272 "type": "rebuild", 00:45:00.272 "target": "spare", 00:45:00.272 "progress": { 00:45:00.272 "blocks": 21120, 00:45:00.272 "percent": 10 00:45:00.272 } 00:45:00.272 }, 00:45:00.272 "base_bdevs_list": [ 00:45:00.272 { 00:45:00.272 "name": "spare", 00:45:00.272 "uuid": "24c0990a-b94c-59d9-a413-1230e5fa9f7f", 00:45:00.272 "is_configured": true, 00:45:00.272 "data_offset": 0, 00:45:00.272 "data_size": 65536 00:45:00.272 }, 00:45:00.272 { 00:45:00.272 "name": "BaseBdev2", 00:45:00.272 "uuid": "5eb06385-426d-5529-aac1-a9350975e7b6", 00:45:00.272 "is_configured": true, 00:45:00.272 "data_offset": 0, 00:45:00.272 "data_size": 65536 00:45:00.272 }, 00:45:00.272 { 00:45:00.272 "name": "BaseBdev3", 00:45:00.272 "uuid": "f03370be-e186-50bf-9f36-f29f9b2b7de2", 00:45:00.272 "is_configured": true, 00:45:00.272 "data_offset": 0, 00:45:00.272 "data_size": 65536 00:45:00.272 }, 00:45:00.272 { 00:45:00.272 "name": "BaseBdev4", 00:45:00.272 "uuid": "7dce3c36-fb52-526a-8908-7eaa724e1da3", 00:45:00.272 "is_configured": true, 00:45:00.272 "data_offset": 0, 00:45:00.272 "data_size": 65536 00:45:00.272 } 00:45:00.272 ] 00:45:00.272 }' 00:45:00.272 05:35:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:45:00.272 05:35:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:00.272 05:35:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:45:00.272 05:35:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:45:00.272 05:35:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:45:01.220 05:35:48 
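The `percent` values in the rebuild progress JSON polled above appear consistent with a total of 3 × 65536 = 196608 blocks (the `data_size` of each of the three surviving data bdevs), with integer truncation; that total is this editor's inference from the observed pairs, not a value stated in the log:

```shell
# Check the observed (blocks, percent) pairs from the progress JSON against
# an assumed total of 3 data bdevs x 65536 blocks, using integer division.
total=$((3 * 65536))   # 196608; inferred, not printed by the test itself
for blocks in 17280 21120 42240; do
  echo "$blocks -> $((blocks * 100 / total))%"   # matches 8, 10, 21 in the log
done
```

The later samples (65280 → 33, 88320 → 44, 109440 → 55, 132480 → 67) fit the same formula.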
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:45:01.220 05:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:01.220 05:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:45:01.220 05:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:45:01.220 05:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:45:01.220 05:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:45:01.220 05:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:01.220 05:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:01.220 05:35:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:01.220 05:35:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:45:01.220 05:35:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:01.220 05:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:45:01.220 "name": "raid_bdev1", 00:45:01.220 "uuid": "bfd35a4e-5a6f-40e5-9632-3e8480cf2a02", 00:45:01.220 "strip_size_kb": 64, 00:45:01.220 "state": "online", 00:45:01.220 "raid_level": "raid5f", 00:45:01.220 "superblock": false, 00:45:01.220 "num_base_bdevs": 4, 00:45:01.220 "num_base_bdevs_discovered": 4, 00:45:01.220 "num_base_bdevs_operational": 4, 00:45:01.220 "process": { 00:45:01.220 "type": "rebuild", 00:45:01.220 "target": "spare", 00:45:01.220 "progress": { 00:45:01.220 "blocks": 42240, 00:45:01.220 "percent": 21 00:45:01.220 } 00:45:01.220 }, 00:45:01.220 "base_bdevs_list": [ 00:45:01.220 { 00:45:01.220 "name": "spare", 00:45:01.220 "uuid": "24c0990a-b94c-59d9-a413-1230e5fa9f7f", 
00:45:01.220 "is_configured": true, 00:45:01.220 "data_offset": 0, 00:45:01.220 "data_size": 65536 00:45:01.220 }, 00:45:01.220 { 00:45:01.220 "name": "BaseBdev2", 00:45:01.220 "uuid": "5eb06385-426d-5529-aac1-a9350975e7b6", 00:45:01.220 "is_configured": true, 00:45:01.220 "data_offset": 0, 00:45:01.220 "data_size": 65536 00:45:01.220 }, 00:45:01.220 { 00:45:01.220 "name": "BaseBdev3", 00:45:01.220 "uuid": "f03370be-e186-50bf-9f36-f29f9b2b7de2", 00:45:01.220 "is_configured": true, 00:45:01.220 "data_offset": 0, 00:45:01.220 "data_size": 65536 00:45:01.220 }, 00:45:01.220 { 00:45:01.220 "name": "BaseBdev4", 00:45:01.220 "uuid": "7dce3c36-fb52-526a-8908-7eaa724e1da3", 00:45:01.220 "is_configured": true, 00:45:01.220 "data_offset": 0, 00:45:01.220 "data_size": 65536 00:45:01.220 } 00:45:01.220 ] 00:45:01.220 }' 00:45:01.478 05:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:45:01.478 05:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:01.478 05:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:45:01.478 05:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:45:01.478 05:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:45:02.435 05:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:45:02.435 05:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:02.435 05:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:45:02.435 05:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:45:02.435 05:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:45:02.435 05:35:49 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:45:02.435 05:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:02.435 05:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:02.435 05:35:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:02.435 05:35:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:45:02.435 05:35:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:02.435 05:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:45:02.435 "name": "raid_bdev1", 00:45:02.435 "uuid": "bfd35a4e-5a6f-40e5-9632-3e8480cf2a02", 00:45:02.435 "strip_size_kb": 64, 00:45:02.435 "state": "online", 00:45:02.435 "raid_level": "raid5f", 00:45:02.435 "superblock": false, 00:45:02.435 "num_base_bdevs": 4, 00:45:02.435 "num_base_bdevs_discovered": 4, 00:45:02.435 "num_base_bdevs_operational": 4, 00:45:02.435 "process": { 00:45:02.435 "type": "rebuild", 00:45:02.435 "target": "spare", 00:45:02.435 "progress": { 00:45:02.435 "blocks": 65280, 00:45:02.435 "percent": 33 00:45:02.435 } 00:45:02.435 }, 00:45:02.435 "base_bdevs_list": [ 00:45:02.435 { 00:45:02.435 "name": "spare", 00:45:02.435 "uuid": "24c0990a-b94c-59d9-a413-1230e5fa9f7f", 00:45:02.435 "is_configured": true, 00:45:02.435 "data_offset": 0, 00:45:02.435 "data_size": 65536 00:45:02.435 }, 00:45:02.435 { 00:45:02.435 "name": "BaseBdev2", 00:45:02.435 "uuid": "5eb06385-426d-5529-aac1-a9350975e7b6", 00:45:02.435 "is_configured": true, 00:45:02.435 "data_offset": 0, 00:45:02.435 "data_size": 65536 00:45:02.435 }, 00:45:02.435 { 00:45:02.435 "name": "BaseBdev3", 00:45:02.435 "uuid": "f03370be-e186-50bf-9f36-f29f9b2b7de2", 00:45:02.435 "is_configured": true, 00:45:02.435 "data_offset": 0, 00:45:02.435 "data_size": 65536 00:45:02.435 }, 00:45:02.435 { 00:45:02.435 "name": 
"BaseBdev4", 00:45:02.435 "uuid": "7dce3c36-fb52-526a-8908-7eaa724e1da3", 00:45:02.435 "is_configured": true, 00:45:02.435 "data_offset": 0, 00:45:02.435 "data_size": 65536 00:45:02.435 } 00:45:02.435 ] 00:45:02.435 }' 00:45:02.435 05:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:45:02.692 05:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:02.692 05:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:45:02.692 05:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:45:02.692 05:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:45:03.628 05:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:45:03.628 05:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:03.628 05:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:45:03.628 05:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:45:03.628 05:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:45:03.628 05:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:45:03.628 05:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:03.628 05:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:03.628 05:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:03.628 05:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:45:03.628 05:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:03.628 05:35:50 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:45:03.628 "name": "raid_bdev1", 00:45:03.628 "uuid": "bfd35a4e-5a6f-40e5-9632-3e8480cf2a02", 00:45:03.628 "strip_size_kb": 64, 00:45:03.628 "state": "online", 00:45:03.628 "raid_level": "raid5f", 00:45:03.628 "superblock": false, 00:45:03.628 "num_base_bdevs": 4, 00:45:03.628 "num_base_bdevs_discovered": 4, 00:45:03.628 "num_base_bdevs_operational": 4, 00:45:03.628 "process": { 00:45:03.628 "type": "rebuild", 00:45:03.628 "target": "spare", 00:45:03.628 "progress": { 00:45:03.628 "blocks": 88320, 00:45:03.628 "percent": 44 00:45:03.628 } 00:45:03.628 }, 00:45:03.628 "base_bdevs_list": [ 00:45:03.628 { 00:45:03.628 "name": "spare", 00:45:03.628 "uuid": "24c0990a-b94c-59d9-a413-1230e5fa9f7f", 00:45:03.628 "is_configured": true, 00:45:03.628 "data_offset": 0, 00:45:03.628 "data_size": 65536 00:45:03.628 }, 00:45:03.628 { 00:45:03.628 "name": "BaseBdev2", 00:45:03.628 "uuid": "5eb06385-426d-5529-aac1-a9350975e7b6", 00:45:03.628 "is_configured": true, 00:45:03.628 "data_offset": 0, 00:45:03.628 "data_size": 65536 00:45:03.628 }, 00:45:03.628 { 00:45:03.628 "name": "BaseBdev3", 00:45:03.628 "uuid": "f03370be-e186-50bf-9f36-f29f9b2b7de2", 00:45:03.628 "is_configured": true, 00:45:03.628 "data_offset": 0, 00:45:03.628 "data_size": 65536 00:45:03.628 }, 00:45:03.628 { 00:45:03.628 "name": "BaseBdev4", 00:45:03.628 "uuid": "7dce3c36-fb52-526a-8908-7eaa724e1da3", 00:45:03.628 "is_configured": true, 00:45:03.628 "data_offset": 0, 00:45:03.628 "data_size": 65536 00:45:03.628 } 00:45:03.628 ] 00:45:03.628 }' 00:45:03.628 05:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:45:03.628 05:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:03.628 05:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:45:03.886 05:35:50 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:45:03.886 05:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:45:04.820 05:35:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:45:04.820 05:35:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:04.820 05:35:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:45:04.820 05:35:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:45:04.820 05:35:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:45:04.820 05:35:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:45:04.820 05:35:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:04.820 05:35:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:04.820 05:35:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:04.820 05:35:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:45:04.820 05:35:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:04.820 05:35:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:45:04.820 "name": "raid_bdev1", 00:45:04.820 "uuid": "bfd35a4e-5a6f-40e5-9632-3e8480cf2a02", 00:45:04.820 "strip_size_kb": 64, 00:45:04.820 "state": "online", 00:45:04.820 "raid_level": "raid5f", 00:45:04.820 "superblock": false, 00:45:04.820 "num_base_bdevs": 4, 00:45:04.820 "num_base_bdevs_discovered": 4, 00:45:04.820 "num_base_bdevs_operational": 4, 00:45:04.820 "process": { 00:45:04.820 "type": "rebuild", 00:45:04.820 "target": "spare", 00:45:04.820 "progress": { 00:45:04.820 "blocks": 109440, 00:45:04.820 "percent": 55 00:45:04.820 } 
00:45:04.820 }, 00:45:04.820 "base_bdevs_list": [ 00:45:04.820 { 00:45:04.820 "name": "spare", 00:45:04.820 "uuid": "24c0990a-b94c-59d9-a413-1230e5fa9f7f", 00:45:04.820 "is_configured": true, 00:45:04.820 "data_offset": 0, 00:45:04.820 "data_size": 65536 00:45:04.820 }, 00:45:04.820 { 00:45:04.820 "name": "BaseBdev2", 00:45:04.820 "uuid": "5eb06385-426d-5529-aac1-a9350975e7b6", 00:45:04.820 "is_configured": true, 00:45:04.820 "data_offset": 0, 00:45:04.820 "data_size": 65536 00:45:04.820 }, 00:45:04.820 { 00:45:04.820 "name": "BaseBdev3", 00:45:04.820 "uuid": "f03370be-e186-50bf-9f36-f29f9b2b7de2", 00:45:04.820 "is_configured": true, 00:45:04.820 "data_offset": 0, 00:45:04.820 "data_size": 65536 00:45:04.820 }, 00:45:04.820 { 00:45:04.820 "name": "BaseBdev4", 00:45:04.820 "uuid": "7dce3c36-fb52-526a-8908-7eaa724e1da3", 00:45:04.820 "is_configured": true, 00:45:04.820 "data_offset": 0, 00:45:04.820 "data_size": 65536 00:45:04.820 } 00:45:04.820 ] 00:45:04.820 }' 00:45:04.820 05:35:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:45:04.820 05:35:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:04.820 05:35:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:45:05.077 05:35:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:45:05.077 05:35:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:45:06.012 05:35:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:45:06.012 05:35:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:06.012 05:35:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:45:06.012 05:35:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:45:06.012 
05:35:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:45:06.012 05:35:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:45:06.012 05:35:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:06.012 05:35:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:06.012 05:35:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:45:06.012 05:35:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:06.012 05:35:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:06.012 05:35:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:45:06.012 "name": "raid_bdev1", 00:45:06.012 "uuid": "bfd35a4e-5a6f-40e5-9632-3e8480cf2a02", 00:45:06.012 "strip_size_kb": 64, 00:45:06.012 "state": "online", 00:45:06.012 "raid_level": "raid5f", 00:45:06.012 "superblock": false, 00:45:06.012 "num_base_bdevs": 4, 00:45:06.012 "num_base_bdevs_discovered": 4, 00:45:06.012 "num_base_bdevs_operational": 4, 00:45:06.012 "process": { 00:45:06.012 "type": "rebuild", 00:45:06.012 "target": "spare", 00:45:06.012 "progress": { 00:45:06.012 "blocks": 132480, 00:45:06.012 "percent": 67 00:45:06.012 } 00:45:06.012 }, 00:45:06.012 "base_bdevs_list": [ 00:45:06.012 { 00:45:06.012 "name": "spare", 00:45:06.012 "uuid": "24c0990a-b94c-59d9-a413-1230e5fa9f7f", 00:45:06.012 "is_configured": true, 00:45:06.012 "data_offset": 0, 00:45:06.012 "data_size": 65536 00:45:06.012 }, 00:45:06.012 { 00:45:06.012 "name": "BaseBdev2", 00:45:06.012 "uuid": "5eb06385-426d-5529-aac1-a9350975e7b6", 00:45:06.012 "is_configured": true, 00:45:06.012 "data_offset": 0, 00:45:06.012 "data_size": 65536 00:45:06.012 }, 00:45:06.012 { 00:45:06.012 "name": "BaseBdev3", 00:45:06.012 "uuid": "f03370be-e186-50bf-9f36-f29f9b2b7de2", 
00:45:06.012 "is_configured": true, 00:45:06.012 "data_offset": 0, 00:45:06.012 "data_size": 65536 00:45:06.012 }, 00:45:06.012 { 00:45:06.012 "name": "BaseBdev4", 00:45:06.012 "uuid": "7dce3c36-fb52-526a-8908-7eaa724e1da3", 00:45:06.012 "is_configured": true, 00:45:06.012 "data_offset": 0, 00:45:06.012 "data_size": 65536 00:45:06.012 } 00:45:06.012 ] 00:45:06.012 }' 00:45:06.012 05:35:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:45:06.012 05:35:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:06.012 05:35:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:45:06.012 05:35:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:45:06.012 05:35:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:45:07.388 05:35:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:45:07.388 05:35:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:07.388 05:35:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:45:07.388 05:35:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:45:07.388 05:35:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:45:07.388 05:35:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:45:07.388 05:35:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:07.388 05:35:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:07.388 05:35:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:45:07.388 05:35:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:45:07.388 05:35:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:07.388 05:35:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:45:07.388 "name": "raid_bdev1", 00:45:07.388 "uuid": "bfd35a4e-5a6f-40e5-9632-3e8480cf2a02", 00:45:07.388 "strip_size_kb": 64, 00:45:07.388 "state": "online", 00:45:07.388 "raid_level": "raid5f", 00:45:07.388 "superblock": false, 00:45:07.388 "num_base_bdevs": 4, 00:45:07.388 "num_base_bdevs_discovered": 4, 00:45:07.388 "num_base_bdevs_operational": 4, 00:45:07.388 "process": { 00:45:07.388 "type": "rebuild", 00:45:07.388 "target": "spare", 00:45:07.388 "progress": { 00:45:07.388 "blocks": 153600, 00:45:07.388 "percent": 78 00:45:07.388 } 00:45:07.388 }, 00:45:07.388 "base_bdevs_list": [ 00:45:07.388 { 00:45:07.388 "name": "spare", 00:45:07.388 "uuid": "24c0990a-b94c-59d9-a413-1230e5fa9f7f", 00:45:07.388 "is_configured": true, 00:45:07.388 "data_offset": 0, 00:45:07.388 "data_size": 65536 00:45:07.388 }, 00:45:07.388 { 00:45:07.388 "name": "BaseBdev2", 00:45:07.388 "uuid": "5eb06385-426d-5529-aac1-a9350975e7b6", 00:45:07.388 "is_configured": true, 00:45:07.388 "data_offset": 0, 00:45:07.388 "data_size": 65536 00:45:07.388 }, 00:45:07.388 { 00:45:07.388 "name": "BaseBdev3", 00:45:07.388 "uuid": "f03370be-e186-50bf-9f36-f29f9b2b7de2", 00:45:07.388 "is_configured": true, 00:45:07.388 "data_offset": 0, 00:45:07.388 "data_size": 65536 00:45:07.388 }, 00:45:07.388 { 00:45:07.388 "name": "BaseBdev4", 00:45:07.388 "uuid": "7dce3c36-fb52-526a-8908-7eaa724e1da3", 00:45:07.388 "is_configured": true, 00:45:07.388 "data_offset": 0, 00:45:07.388 "data_size": 65536 00:45:07.388 } 00:45:07.388 ] 00:45:07.388 }' 00:45:07.388 05:35:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:45:07.388 05:35:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:07.388 05:35:54 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:45:07.388 05:35:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:45:07.388 05:35:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:45:08.321 05:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:45:08.321 05:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:08.321 05:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:45:08.321 05:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:45:08.321 05:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:45:08.321 05:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:45:08.321 05:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:08.321 05:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:08.321 05:35:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:08.321 05:35:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:45:08.321 05:35:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:08.321 05:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:45:08.321 "name": "raid_bdev1", 00:45:08.321 "uuid": "bfd35a4e-5a6f-40e5-9632-3e8480cf2a02", 00:45:08.321 "strip_size_kb": 64, 00:45:08.321 "state": "online", 00:45:08.321 "raid_level": "raid5f", 00:45:08.321 "superblock": false, 00:45:08.321 "num_base_bdevs": 4, 00:45:08.321 "num_base_bdevs_discovered": 4, 00:45:08.321 "num_base_bdevs_operational": 4, 00:45:08.321 "process": { 00:45:08.321 
"type": "rebuild", 00:45:08.321 "target": "spare", 00:45:08.322 "progress": { 00:45:08.322 "blocks": 176640, 00:45:08.322 "percent": 89 00:45:08.322 } 00:45:08.322 }, 00:45:08.322 "base_bdevs_list": [ 00:45:08.322 { 00:45:08.322 "name": "spare", 00:45:08.322 "uuid": "24c0990a-b94c-59d9-a413-1230e5fa9f7f", 00:45:08.322 "is_configured": true, 00:45:08.322 "data_offset": 0, 00:45:08.322 "data_size": 65536 00:45:08.322 }, 00:45:08.322 { 00:45:08.322 "name": "BaseBdev2", 00:45:08.322 "uuid": "5eb06385-426d-5529-aac1-a9350975e7b6", 00:45:08.322 "is_configured": true, 00:45:08.322 "data_offset": 0, 00:45:08.322 "data_size": 65536 00:45:08.322 }, 00:45:08.322 { 00:45:08.322 "name": "BaseBdev3", 00:45:08.322 "uuid": "f03370be-e186-50bf-9f36-f29f9b2b7de2", 00:45:08.322 "is_configured": true, 00:45:08.322 "data_offset": 0, 00:45:08.322 "data_size": 65536 00:45:08.322 }, 00:45:08.322 { 00:45:08.322 "name": "BaseBdev4", 00:45:08.322 "uuid": "7dce3c36-fb52-526a-8908-7eaa724e1da3", 00:45:08.322 "is_configured": true, 00:45:08.322 "data_offset": 0, 00:45:08.322 "data_size": 65536 00:45:08.322 } 00:45:08.322 ] 00:45:08.322 }' 00:45:08.322 05:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:45:08.322 05:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:08.322 05:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:45:08.579 05:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:45:08.579 05:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:45:09.512 [2024-12-09 05:35:56.226186] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:45:09.512 [2024-12-09 05:35:56.226299] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:45:09.512 [2024-12-09 05:35:56.226382] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:45:09.512 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:45:09.512 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:09.512 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:45:09.512 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:45:09.512 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:45:09.512 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:45:09.512 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:09.512 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:09.512 05:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:09.512 05:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:45:09.512 05:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:09.512 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:45:09.512 "name": "raid_bdev1", 00:45:09.512 "uuid": "bfd35a4e-5a6f-40e5-9632-3e8480cf2a02", 00:45:09.512 "strip_size_kb": 64, 00:45:09.512 "state": "online", 00:45:09.512 "raid_level": "raid5f", 00:45:09.512 "superblock": false, 00:45:09.512 "num_base_bdevs": 4, 00:45:09.512 "num_base_bdevs_discovered": 4, 00:45:09.512 "num_base_bdevs_operational": 4, 00:45:09.512 "base_bdevs_list": [ 00:45:09.512 { 00:45:09.512 "name": "spare", 00:45:09.512 "uuid": "24c0990a-b94c-59d9-a413-1230e5fa9f7f", 00:45:09.512 "is_configured": true, 00:45:09.512 "data_offset": 0, 00:45:09.512 "data_size": 65536 00:45:09.512 }, 00:45:09.512 { 
00:45:09.512 "name": "BaseBdev2", 00:45:09.512 "uuid": "5eb06385-426d-5529-aac1-a9350975e7b6", 00:45:09.512 "is_configured": true, 00:45:09.512 "data_offset": 0, 00:45:09.512 "data_size": 65536 00:45:09.512 }, 00:45:09.512 { 00:45:09.512 "name": "BaseBdev3", 00:45:09.512 "uuid": "f03370be-e186-50bf-9f36-f29f9b2b7de2", 00:45:09.513 "is_configured": true, 00:45:09.513 "data_offset": 0, 00:45:09.513 "data_size": 65536 00:45:09.513 }, 00:45:09.513 { 00:45:09.513 "name": "BaseBdev4", 00:45:09.513 "uuid": "7dce3c36-fb52-526a-8908-7eaa724e1da3", 00:45:09.513 "is_configured": true, 00:45:09.513 "data_offset": 0, 00:45:09.513 "data_size": 65536 00:45:09.513 } 00:45:09.513 ] 00:45:09.513 }' 00:45:09.513 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:45:09.513 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:45:09.513 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:45:09.513 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:45:09.513 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:45:09.513 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:45:09.513 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:45:09.513 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:45:09.513 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:45:09.513 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:45:09.513 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:09.513 05:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:45:09.513 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:09.513 05:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:45:09.771 05:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:09.771 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:45:09.771 "name": "raid_bdev1", 00:45:09.771 "uuid": "bfd35a4e-5a6f-40e5-9632-3e8480cf2a02", 00:45:09.771 "strip_size_kb": 64, 00:45:09.771 "state": "online", 00:45:09.771 "raid_level": "raid5f", 00:45:09.771 "superblock": false, 00:45:09.771 "num_base_bdevs": 4, 00:45:09.771 "num_base_bdevs_discovered": 4, 00:45:09.771 "num_base_bdevs_operational": 4, 00:45:09.771 "base_bdevs_list": [ 00:45:09.771 { 00:45:09.771 "name": "spare", 00:45:09.771 "uuid": "24c0990a-b94c-59d9-a413-1230e5fa9f7f", 00:45:09.771 "is_configured": true, 00:45:09.771 "data_offset": 0, 00:45:09.771 "data_size": 65536 00:45:09.771 }, 00:45:09.771 { 00:45:09.771 "name": "BaseBdev2", 00:45:09.771 "uuid": "5eb06385-426d-5529-aac1-a9350975e7b6", 00:45:09.771 "is_configured": true, 00:45:09.771 "data_offset": 0, 00:45:09.771 "data_size": 65536 00:45:09.771 }, 00:45:09.771 { 00:45:09.771 "name": "BaseBdev3", 00:45:09.771 "uuid": "f03370be-e186-50bf-9f36-f29f9b2b7de2", 00:45:09.771 "is_configured": true, 00:45:09.771 "data_offset": 0, 00:45:09.771 "data_size": 65536 00:45:09.771 }, 00:45:09.771 { 00:45:09.771 "name": "BaseBdev4", 00:45:09.771 "uuid": "7dce3c36-fb52-526a-8908-7eaa724e1da3", 00:45:09.771 "is_configured": true, 00:45:09.771 "data_offset": 0, 00:45:09.771 "data_size": 65536 00:45:09.771 } 00:45:09.771 ] 00:45:09.771 }' 00:45:09.771 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:45:09.771 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:45:09.771 05:35:56 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:45:09.771 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:45:09.771 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:45:09.771 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:45:09.771 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:45:09.771 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:45:09.771 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:45:09.771 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:45:09.771 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:45:09.771 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:45:09.771 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:45:09.771 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:45:09.771 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:09.771 05:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:09.771 05:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:45:09.771 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:09.771 05:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:09.771 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:45:09.771 "name": "raid_bdev1", 00:45:09.771 "uuid": 
"bfd35a4e-5a6f-40e5-9632-3e8480cf2a02", 00:45:09.771 "strip_size_kb": 64, 00:45:09.771 "state": "online", 00:45:09.771 "raid_level": "raid5f", 00:45:09.771 "superblock": false, 00:45:09.771 "num_base_bdevs": 4, 00:45:09.771 "num_base_bdevs_discovered": 4, 00:45:09.771 "num_base_bdevs_operational": 4, 00:45:09.771 "base_bdevs_list": [ 00:45:09.771 { 00:45:09.771 "name": "spare", 00:45:09.771 "uuid": "24c0990a-b94c-59d9-a413-1230e5fa9f7f", 00:45:09.771 "is_configured": true, 00:45:09.771 "data_offset": 0, 00:45:09.771 "data_size": 65536 00:45:09.771 }, 00:45:09.771 { 00:45:09.771 "name": "BaseBdev2", 00:45:09.771 "uuid": "5eb06385-426d-5529-aac1-a9350975e7b6", 00:45:09.771 "is_configured": true, 00:45:09.771 "data_offset": 0, 00:45:09.771 "data_size": 65536 00:45:09.771 }, 00:45:09.771 { 00:45:09.771 "name": "BaseBdev3", 00:45:09.771 "uuid": "f03370be-e186-50bf-9f36-f29f9b2b7de2", 00:45:09.771 "is_configured": true, 00:45:09.771 "data_offset": 0, 00:45:09.771 "data_size": 65536 00:45:09.771 }, 00:45:09.771 { 00:45:09.771 "name": "BaseBdev4", 00:45:09.771 "uuid": "7dce3c36-fb52-526a-8908-7eaa724e1da3", 00:45:09.771 "is_configured": true, 00:45:09.771 "data_offset": 0, 00:45:09.771 "data_size": 65536 00:45:09.771 } 00:45:09.771 ] 00:45:09.771 }' 00:45:09.771 05:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:45:09.771 05:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:45:10.338 05:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:45:10.338 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:10.338 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:45:10.338 [2024-12-09 05:35:57.150779] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:45:10.338 [2024-12-09 05:35:57.150855] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:45:10.338 [2024-12-09 05:35:57.150979] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:45:10.338 [2024-12-09 05:35:57.151136] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:45:10.338 [2024-12-09 05:35:57.151154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:45:10.338 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:10.338 05:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:10.338 05:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:45:10.338 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:10.338 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:45:10.338 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:10.338 05:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:45:10.338 05:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:45:10.338 05:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:45:10.338 05:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:45:10.338 05:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:45:10.338 05:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:45:10.338 05:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:45:10.338 05:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:45:10.338 05:35:57 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:45:10.338 05:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:45:10.338 05:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:45:10.338 05:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:45:10.338 05:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:45:10.597 /dev/nbd0 00:45:10.597 05:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:45:10.597 05:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:45:10.597 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:45:10.597 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:45:10.597 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:45:10.597 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:45:10.597 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:45:10.597 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:45:10.597 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:45:10.597 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:45:10.597 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:45:10.857 1+0 records in 00:45:10.857 1+0 records out 00:45:10.857 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000581876 s, 7.0 MB/s 00:45:10.857 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:10.857 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:45:10.857 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:10.857 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:45:10.857 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:45:10.857 05:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:45:10.857 05:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:45:10.857 05:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:45:10.857 /dev/nbd1 00:45:11.115 05:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:45:11.115 05:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:45:11.115 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:45:11.115 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:45:11.115 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:45:11.115 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:45:11.115 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:45:11.115 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:45:11.115 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:45:11.115 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:45:11.115 05:35:57 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:45:11.115 1+0 records in 00:45:11.115 1+0 records out 00:45:11.115 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489007 s, 8.4 MB/s 00:45:11.115 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:11.115 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:45:11.115 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:11.115 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:45:11.115 05:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:45:11.115 05:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:45:11.115 05:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:45:11.115 05:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:45:11.115 05:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:45:11.115 05:35:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:45:11.115 05:35:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:45:11.115 05:35:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:45:11.115 05:35:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:45:11.115 05:35:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:45:11.115 05:35:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:45:11.373 05:35:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:45:11.373 05:35:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:45:11.373 05:35:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:45:11.373 05:35:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:45:11.373 05:35:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:45:11.373 05:35:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:45:11.373 05:35:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:45:11.373 05:35:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:45:11.373 05:35:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:45:11.373 05:35:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:45:11.940 05:35:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:45:11.940 05:35:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:45:11.940 05:35:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:45:11.940 05:35:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:45:11.940 05:35:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:45:11.940 05:35:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:45:11.940 05:35:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:45:11.940 05:35:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:45:11.940 05:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:45:11.940 05:35:58 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85131 00:45:11.940 05:35:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 85131 ']' 00:45:11.940 05:35:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 85131 00:45:11.940 05:35:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:45:11.940 05:35:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:11.940 05:35:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85131 00:45:11.940 killing process with pid 85131 00:45:11.940 Received shutdown signal, test time was about 60.000000 seconds 00:45:11.940 00:45:11.940 Latency(us) 00:45:11.940 [2024-12-09T05:35:58.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:11.940 [2024-12-09T05:35:58.912Z] =================================================================================================================== 00:45:11.940 [2024-12-09T05:35:58.912Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:45:11.940 05:35:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:11.941 05:35:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:11.941 05:35:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85131' 00:45:11.941 05:35:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 85131 00:45:11.941 [2024-12-09 05:35:58.743924] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:45:11.941 05:35:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 85131 00:45:12.550 [2024-12-09 05:35:59.203215] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:45:13.482 05:36:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # 
return 0 00:45:13.482 00:45:13.482 real 0m20.410s 00:45:13.482 user 0m25.378s 00:45:13.482 sys 0m2.348s 00:45:13.482 05:36:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:13.482 05:36:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:45:13.482 ************************************ 00:45:13.482 END TEST raid5f_rebuild_test 00:45:13.482 ************************************ 00:45:13.741 05:36:00 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:45:13.741 05:36:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:45:13.741 05:36:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:13.741 05:36:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:45:13.741 ************************************ 00:45:13.741 START TEST raid5f_rebuild_test_sb 00:45:13.741 ************************************ 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:45:13.741 05:36:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 
00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85640 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85640 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85640 ']' 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:13.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:13.741 05:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:13.741 I/O size of 3145728 is greater than zero copy threshold (65536). 00:45:13.741 Zero copy mechanism will not be used. 
00:45:13.741 [2024-12-09 05:36:00.599585] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:45:13.741 [2024-12-09 05:36:00.599780] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85640 ] 00:45:14.000 [2024-12-09 05:36:00.786830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:14.000 [2024-12-09 05:36:00.920484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:14.257 [2024-12-09 05:36:01.131012] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:45:14.257 [2024-12-09 05:36:01.131308] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:45:14.822 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:14.822 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:45:14.822 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:45:14.822 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:45:14.822 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:14.822 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:14.822 BaseBdev1_malloc 00:45:14.822 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:14.822 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:45:14.822 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:14.822 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:45:14.822 [2024-12-09 05:36:01.670420] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:45:14.822 [2024-12-09 05:36:01.670602] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:14.822 [2024-12-09 05:36:01.670638] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:45:14.822 [2024-12-09 05:36:01.670658] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:14.822 [2024-12-09 05:36:01.673599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:14.822 [2024-12-09 05:36:01.673651] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:45:14.822 BaseBdev1 00:45:14.822 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:14.822 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:45:14.822 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:45:14.822 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:14.822 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:14.822 BaseBdev2_malloc 00:45:14.822 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:14.822 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:45:14.822 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:14.822 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:14.822 [2024-12-09 05:36:01.719932] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:45:14.822 
[2024-12-09 05:36:01.720014] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:14.822 [2024-12-09 05:36:01.720050] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:45:14.822 [2024-12-09 05:36:01.720070] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:14.822 [2024-12-09 05:36:01.722946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:14.822 [2024-12-09 05:36:01.723000] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:45:14.822 BaseBdev2 00:45:14.822 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:14.822 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:45:14.822 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:45:14.822 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:14.822 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:14.822 BaseBdev3_malloc 00:45:14.822 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:14.822 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:45:14.822 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:14.822 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:14.822 [2024-12-09 05:36:01.789407] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:45:14.822 [2024-12-09 05:36:01.789500] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:14.823 [2024-12-09 05:36:01.789534] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:45:14.823 [2024-12-09 05:36:01.789554] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:14.823 [2024-12-09 05:36:01.792560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:14.823 [2024-12-09 05:36:01.792613] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:45:15.081 BaseBdev3 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:15.081 BaseBdev4_malloc 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:15.081 [2024-12-09 05:36:01.843602] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:45:15.081 [2024-12-09 05:36:01.843956] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:15.081 [2024-12-09 05:36:01.843996] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:45:15.081 [2024-12-09 05:36:01.844017] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:45:15.081 [2024-12-09 05:36:01.847166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:15.081 [2024-12-09 05:36:01.847358] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:45:15.081 BaseBdev4 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:15.081 spare_malloc 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:15.081 spare_delay 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:15.081 [2024-12-09 05:36:01.912049] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:45:15.081 [2024-12-09 05:36:01.912121] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:15.081 [2024-12-09 05:36:01.912148] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:45:15.081 [2024-12-09 05:36:01.912166] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:15.081 [2024-12-09 05:36:01.915353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:15.081 [2024-12-09 05:36:01.915583] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:45:15.081 spare 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:15.081 [2024-12-09 05:36:01.920258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:45:15.081 [2024-12-09 05:36:01.923104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:45:15.081 [2024-12-09 05:36:01.923326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:45:15.081 [2024-12-09 05:36:01.923457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:45:15.081 [2024-12-09 05:36:01.923842] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:45:15.081 [2024-12-09 05:36:01.923908] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:45:15.081 [2024-12-09 05:36:01.924311] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:45:15.081 [2024-12-09 05:36:01.931826] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:45:15.081 
[2024-12-09 05:36:01.931991] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:45:15.081 [2024-12-09 05:36:01.932399] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:45:15.081 "name": "raid_bdev1", 00:45:15.081 "uuid": "43ce1f99-b665-4243-b6da-f3729543d2e7", 00:45:15.081 "strip_size_kb": 64, 00:45:15.081 "state": "online", 00:45:15.081 "raid_level": "raid5f", 00:45:15.081 "superblock": true, 00:45:15.081 "num_base_bdevs": 4, 00:45:15.081 "num_base_bdevs_discovered": 4, 00:45:15.081 "num_base_bdevs_operational": 4, 00:45:15.081 "base_bdevs_list": [ 00:45:15.081 { 00:45:15.081 "name": "BaseBdev1", 00:45:15.081 "uuid": "45469772-68e9-57e6-bf88-a621259be54c", 00:45:15.081 "is_configured": true, 00:45:15.081 "data_offset": 2048, 00:45:15.081 "data_size": 63488 00:45:15.081 }, 00:45:15.081 { 00:45:15.081 "name": "BaseBdev2", 00:45:15.081 "uuid": "ca9e31d8-2bfc-5de4-ac37-2ba829398ce2", 00:45:15.081 "is_configured": true, 00:45:15.081 "data_offset": 2048, 00:45:15.081 "data_size": 63488 00:45:15.081 }, 00:45:15.081 { 00:45:15.081 "name": "BaseBdev3", 00:45:15.081 "uuid": "0fe5e86e-6869-5254-87df-2497c7b0f8fc", 00:45:15.081 "is_configured": true, 00:45:15.081 "data_offset": 2048, 00:45:15.081 "data_size": 63488 00:45:15.081 }, 00:45:15.081 { 00:45:15.081 "name": "BaseBdev4", 00:45:15.081 "uuid": "45c95e6f-af25-5d06-bcca-be1696aba049", 00:45:15.081 "is_configured": true, 00:45:15.081 "data_offset": 2048, 00:45:15.081 "data_size": 63488 00:45:15.081 } 00:45:15.081 ] 00:45:15.081 }' 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:45:15.081 05:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:15.660 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:45:15.660 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:45:15.660 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:45:15.660 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:15.660 [2024-12-09 05:36:02.449281] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:45:15.660 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:15.660 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:45:15.660 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:45:15.660 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:15.660 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:15.660 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:15.660 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:15.660 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:45:15.660 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:45:15.660 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:45:15.660 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:45:15.660 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:45:15.660 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:45:15.660 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:45:15.660 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:45:15.660 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:45:15.660 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:45:15.660 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:45:15.660 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:45:15.660 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:45:15.660 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:45:15.918 [2024-12-09 05:36:02.849195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:45:15.918 /dev/nbd0 00:45:15.918 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:45:16.176 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:45:16.176 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:45:16.176 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:45:16.176 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:45:16.176 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:45:16.176 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:45:16.176 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:45:16.176 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:45:16.176 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:45:16.177 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:45:16.177 1+0 records in 00:45:16.177 1+0 records out 00:45:16.177 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316064 s, 13.0 MB/s 00:45:16.177 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:16.177 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:45:16.177 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:16.177 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:45:16.177 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:45:16.177 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:45:16.177 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:45:16.177 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:45:16.177 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:45:16.177 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:45:16.177 05:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:45:16.743 496+0 records in 00:45:16.743 496+0 records out 00:45:16.743 97517568 bytes (98 MB, 93 MiB) copied, 0.620845 s, 157 MB/s 00:45:16.743 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:45:16.743 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:45:16.743 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:45:16.743 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 
-- # local nbd_list 00:45:16.743 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:45:16.743 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:45:16.743 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:45:17.002 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:45:17.002 [2024-12-09 05:36:03.862504] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:45:17.002 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:45:17.002 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:45:17.002 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:45:17.002 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:45:17.003 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:45:17.003 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:45:17.003 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:45:17.003 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:45:17.003 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:17.003 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:17.003 [2024-12-09 05:36:03.874760] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:45:17.003 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:17.003 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online 
raid5f 64 3 00:45:17.003 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:45:17.003 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:45:17.003 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:45:17.003 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:45:17.003 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:45:17.003 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:45:17.003 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:45:17.003 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:45:17.003 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:45:17.003 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:17.003 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:17.003 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:17.003 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:17.003 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:17.003 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:45:17.003 "name": "raid_bdev1", 00:45:17.003 "uuid": "43ce1f99-b665-4243-b6da-f3729543d2e7", 00:45:17.003 "strip_size_kb": 64, 00:45:17.003 "state": "online", 00:45:17.003 "raid_level": "raid5f", 00:45:17.003 "superblock": true, 00:45:17.003 "num_base_bdevs": 4, 00:45:17.003 "num_base_bdevs_discovered": 3, 00:45:17.003 
"num_base_bdevs_operational": 3, 00:45:17.003 "base_bdevs_list": [ 00:45:17.003 { 00:45:17.003 "name": null, 00:45:17.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:45:17.003 "is_configured": false, 00:45:17.003 "data_offset": 0, 00:45:17.003 "data_size": 63488 00:45:17.003 }, 00:45:17.003 { 00:45:17.003 "name": "BaseBdev2", 00:45:17.003 "uuid": "ca9e31d8-2bfc-5de4-ac37-2ba829398ce2", 00:45:17.003 "is_configured": true, 00:45:17.003 "data_offset": 2048, 00:45:17.003 "data_size": 63488 00:45:17.003 }, 00:45:17.003 { 00:45:17.003 "name": "BaseBdev3", 00:45:17.003 "uuid": "0fe5e86e-6869-5254-87df-2497c7b0f8fc", 00:45:17.003 "is_configured": true, 00:45:17.003 "data_offset": 2048, 00:45:17.003 "data_size": 63488 00:45:17.003 }, 00:45:17.003 { 00:45:17.003 "name": "BaseBdev4", 00:45:17.003 "uuid": "45c95e6f-af25-5d06-bcca-be1696aba049", 00:45:17.003 "is_configured": true, 00:45:17.003 "data_offset": 2048, 00:45:17.003 "data_size": 63488 00:45:17.003 } 00:45:17.003 ] 00:45:17.003 }' 00:45:17.003 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:45:17.003 05:36:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:17.570 05:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:45:17.570 05:36:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:17.570 05:36:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:17.570 [2024-12-09 05:36:04.386932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:45:17.570 [2024-12-09 05:36:04.402287] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:45:17.570 05:36:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:17.570 05:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:45:17.570 
[2024-12-09 05:36:04.411808] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:45:18.505 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:18.505 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:45:18.505 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:45:18.505 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:45:18.505 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:45:18.505 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:18.505 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:18.505 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:18.505 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:18.505 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:18.505 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:45:18.505 "name": "raid_bdev1", 00:45:18.505 "uuid": "43ce1f99-b665-4243-b6da-f3729543d2e7", 00:45:18.505 "strip_size_kb": 64, 00:45:18.505 "state": "online", 00:45:18.505 "raid_level": "raid5f", 00:45:18.505 "superblock": true, 00:45:18.505 "num_base_bdevs": 4, 00:45:18.505 "num_base_bdevs_discovered": 4, 00:45:18.505 "num_base_bdevs_operational": 4, 00:45:18.505 "process": { 00:45:18.505 "type": "rebuild", 00:45:18.505 "target": "spare", 00:45:18.505 "progress": { 00:45:18.505 "blocks": 17280, 00:45:18.505 "percent": 9 00:45:18.505 } 00:45:18.505 }, 00:45:18.505 "base_bdevs_list": [ 00:45:18.505 { 00:45:18.505 "name": 
"spare", 00:45:18.505 "uuid": "1c1fbb4f-8c8e-5640-bbd9-c66e30e2dfd5", 00:45:18.505 "is_configured": true, 00:45:18.505 "data_offset": 2048, 00:45:18.505 "data_size": 63488 00:45:18.505 }, 00:45:18.505 { 00:45:18.505 "name": "BaseBdev2", 00:45:18.505 "uuid": "ca9e31d8-2bfc-5de4-ac37-2ba829398ce2", 00:45:18.505 "is_configured": true, 00:45:18.505 "data_offset": 2048, 00:45:18.505 "data_size": 63488 00:45:18.505 }, 00:45:18.505 { 00:45:18.505 "name": "BaseBdev3", 00:45:18.505 "uuid": "0fe5e86e-6869-5254-87df-2497c7b0f8fc", 00:45:18.505 "is_configured": true, 00:45:18.505 "data_offset": 2048, 00:45:18.505 "data_size": 63488 00:45:18.505 }, 00:45:18.505 { 00:45:18.505 "name": "BaseBdev4", 00:45:18.505 "uuid": "45c95e6f-af25-5d06-bcca-be1696aba049", 00:45:18.505 "is_configured": true, 00:45:18.505 "data_offset": 2048, 00:45:18.505 "data_size": 63488 00:45:18.505 } 00:45:18.505 ] 00:45:18.505 }' 00:45:18.505 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:45:18.763 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:18.763 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:45:18.763 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:45:18.763 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:45:18.763 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:18.763 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:18.763 [2024-12-09 05:36:05.577442] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:45:18.763 [2024-12-09 05:36:05.624060] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:45:18.763 [2024-12-09 
05:36:05.624301] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:45:18.763 [2024-12-09 05:36:05.624480] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:45:18.763 [2024-12-09 05:36:05.624599] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:45:18.763 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:18.763 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:45:18.763 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:45:18.763 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:45:18.763 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:45:18.763 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:45:18.763 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:45:18.763 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:45:18.763 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:45:18.763 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:45:18.763 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:45:18.763 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:18.763 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:18.763 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:18.763 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:45:18.763 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:18.763 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:45:18.763 "name": "raid_bdev1", 00:45:18.763 "uuid": "43ce1f99-b665-4243-b6da-f3729543d2e7", 00:45:18.763 "strip_size_kb": 64, 00:45:18.763 "state": "online", 00:45:18.763 "raid_level": "raid5f", 00:45:18.763 "superblock": true, 00:45:18.763 "num_base_bdevs": 4, 00:45:18.763 "num_base_bdevs_discovered": 3, 00:45:18.763 "num_base_bdevs_operational": 3, 00:45:18.763 "base_bdevs_list": [ 00:45:18.763 { 00:45:18.763 "name": null, 00:45:18.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:45:18.763 "is_configured": false, 00:45:18.763 "data_offset": 0, 00:45:18.763 "data_size": 63488 00:45:18.763 }, 00:45:18.763 { 00:45:18.763 "name": "BaseBdev2", 00:45:18.763 "uuid": "ca9e31d8-2bfc-5de4-ac37-2ba829398ce2", 00:45:18.763 "is_configured": true, 00:45:18.763 "data_offset": 2048, 00:45:18.763 "data_size": 63488 00:45:18.763 }, 00:45:18.763 { 00:45:18.763 "name": "BaseBdev3", 00:45:18.763 "uuid": "0fe5e86e-6869-5254-87df-2497c7b0f8fc", 00:45:18.763 "is_configured": true, 00:45:18.763 "data_offset": 2048, 00:45:18.763 "data_size": 63488 00:45:18.763 }, 00:45:18.764 { 00:45:18.764 "name": "BaseBdev4", 00:45:18.764 "uuid": "45c95e6f-af25-5d06-bcca-be1696aba049", 00:45:18.764 "is_configured": true, 00:45:18.764 "data_offset": 2048, 00:45:18.764 "data_size": 63488 00:45:18.764 } 00:45:18.764 ] 00:45:18.764 }' 00:45:18.764 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:45:18.764 05:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:19.330 05:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:45:19.330 05:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:45:19.330 05:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:45:19.330 05:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:45:19.330 05:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:45:19.330 05:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:19.330 05:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:19.330 05:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:19.330 05:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:19.330 05:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:19.330 05:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:45:19.330 "name": "raid_bdev1", 00:45:19.330 "uuid": "43ce1f99-b665-4243-b6da-f3729543d2e7", 00:45:19.330 "strip_size_kb": 64, 00:45:19.330 "state": "online", 00:45:19.330 "raid_level": "raid5f", 00:45:19.330 "superblock": true, 00:45:19.330 "num_base_bdevs": 4, 00:45:19.330 "num_base_bdevs_discovered": 3, 00:45:19.330 "num_base_bdevs_operational": 3, 00:45:19.330 "base_bdevs_list": [ 00:45:19.330 { 00:45:19.330 "name": null, 00:45:19.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:45:19.330 "is_configured": false, 00:45:19.330 "data_offset": 0, 00:45:19.330 "data_size": 63488 00:45:19.330 }, 00:45:19.330 { 00:45:19.330 "name": "BaseBdev2", 00:45:19.330 "uuid": "ca9e31d8-2bfc-5de4-ac37-2ba829398ce2", 00:45:19.330 "is_configured": true, 00:45:19.330 "data_offset": 2048, 00:45:19.330 "data_size": 63488 00:45:19.330 }, 00:45:19.330 { 00:45:19.330 "name": "BaseBdev3", 00:45:19.330 "uuid": "0fe5e86e-6869-5254-87df-2497c7b0f8fc", 00:45:19.330 "is_configured": true, 
00:45:19.330 "data_offset": 2048, 00:45:19.330 "data_size": 63488 00:45:19.330 }, 00:45:19.330 { 00:45:19.330 "name": "BaseBdev4", 00:45:19.330 "uuid": "45c95e6f-af25-5d06-bcca-be1696aba049", 00:45:19.330 "is_configured": true, 00:45:19.330 "data_offset": 2048, 00:45:19.330 "data_size": 63488 00:45:19.330 } 00:45:19.330 ] 00:45:19.330 }' 00:45:19.330 05:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:45:19.330 05:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:45:19.330 05:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:45:19.589 05:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:45:19.589 05:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:45:19.589 05:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:19.589 05:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:19.589 [2024-12-09 05:36:06.329241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:45:19.589 [2024-12-09 05:36:06.344691] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:45:19.589 05:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:19.589 05:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:45:19.589 [2024-12-09 05:36:06.354840] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:45:20.524 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:20.524 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:45:20.524 05:36:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:45:20.524 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:45:20.524 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:45:20.524 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:20.524 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:20.524 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:20.524 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:20.524 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:20.524 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:45:20.524 "name": "raid_bdev1", 00:45:20.524 "uuid": "43ce1f99-b665-4243-b6da-f3729543d2e7", 00:45:20.524 "strip_size_kb": 64, 00:45:20.524 "state": "online", 00:45:20.524 "raid_level": "raid5f", 00:45:20.524 "superblock": true, 00:45:20.524 "num_base_bdevs": 4, 00:45:20.524 "num_base_bdevs_discovered": 4, 00:45:20.524 "num_base_bdevs_operational": 4, 00:45:20.524 "process": { 00:45:20.524 "type": "rebuild", 00:45:20.524 "target": "spare", 00:45:20.524 "progress": { 00:45:20.524 "blocks": 17280, 00:45:20.524 "percent": 9 00:45:20.524 } 00:45:20.524 }, 00:45:20.524 "base_bdevs_list": [ 00:45:20.524 { 00:45:20.524 "name": "spare", 00:45:20.524 "uuid": "1c1fbb4f-8c8e-5640-bbd9-c66e30e2dfd5", 00:45:20.524 "is_configured": true, 00:45:20.524 "data_offset": 2048, 00:45:20.524 "data_size": 63488 00:45:20.524 }, 00:45:20.524 { 00:45:20.524 "name": "BaseBdev2", 00:45:20.524 "uuid": "ca9e31d8-2bfc-5de4-ac37-2ba829398ce2", 00:45:20.524 "is_configured": true, 00:45:20.524 "data_offset": 2048, 00:45:20.524 "data_size": 63488 
00:45:20.524 }, 00:45:20.524 { 00:45:20.524 "name": "BaseBdev3", 00:45:20.524 "uuid": "0fe5e86e-6869-5254-87df-2497c7b0f8fc", 00:45:20.524 "is_configured": true, 00:45:20.524 "data_offset": 2048, 00:45:20.524 "data_size": 63488 00:45:20.524 }, 00:45:20.524 { 00:45:20.524 "name": "BaseBdev4", 00:45:20.524 "uuid": "45c95e6f-af25-5d06-bcca-be1696aba049", 00:45:20.524 "is_configured": true, 00:45:20.524 "data_offset": 2048, 00:45:20.524 "data_size": 63488 00:45:20.524 } 00:45:20.524 ] 00:45:20.524 }' 00:45:20.524 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:45:20.524 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:20.524 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:45:20.783 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:45:20.783 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:45:20.783 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:45:20.783 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:45:20.783 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:45:20.783 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:45:20.783 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=703 00:45:20.783 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:45:20.783 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:20.783 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:45:20.783 05:36:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:45:20.783 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:45:20.783 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:45:20.783 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:20.783 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:20.783 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:20.783 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:20.783 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:20.784 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:45:20.784 "name": "raid_bdev1", 00:45:20.784 "uuid": "43ce1f99-b665-4243-b6da-f3729543d2e7", 00:45:20.784 "strip_size_kb": 64, 00:45:20.784 "state": "online", 00:45:20.784 "raid_level": "raid5f", 00:45:20.784 "superblock": true, 00:45:20.784 "num_base_bdevs": 4, 00:45:20.784 "num_base_bdevs_discovered": 4, 00:45:20.784 "num_base_bdevs_operational": 4, 00:45:20.784 "process": { 00:45:20.784 "type": "rebuild", 00:45:20.784 "target": "spare", 00:45:20.784 "progress": { 00:45:20.784 "blocks": 21120, 00:45:20.784 "percent": 11 00:45:20.784 } 00:45:20.784 }, 00:45:20.784 "base_bdevs_list": [ 00:45:20.784 { 00:45:20.784 "name": "spare", 00:45:20.784 "uuid": "1c1fbb4f-8c8e-5640-bbd9-c66e30e2dfd5", 00:45:20.784 "is_configured": true, 00:45:20.784 "data_offset": 2048, 00:45:20.784 "data_size": 63488 00:45:20.784 }, 00:45:20.784 { 00:45:20.784 "name": "BaseBdev2", 00:45:20.784 "uuid": "ca9e31d8-2bfc-5de4-ac37-2ba829398ce2", 00:45:20.784 "is_configured": true, 00:45:20.784 "data_offset": 2048, 00:45:20.784 "data_size": 63488 
00:45:20.784 }, 00:45:20.784 { 00:45:20.784 "name": "BaseBdev3", 00:45:20.784 "uuid": "0fe5e86e-6869-5254-87df-2497c7b0f8fc", 00:45:20.784 "is_configured": true, 00:45:20.784 "data_offset": 2048, 00:45:20.784 "data_size": 63488 00:45:20.784 }, 00:45:20.784 { 00:45:20.784 "name": "BaseBdev4", 00:45:20.784 "uuid": "45c95e6f-af25-5d06-bcca-be1696aba049", 00:45:20.784 "is_configured": true, 00:45:20.784 "data_offset": 2048, 00:45:20.784 "data_size": 63488 00:45:20.784 } 00:45:20.784 ] 00:45:20.784 }' 00:45:20.784 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:45:20.784 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:20.784 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:45:20.784 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:45:20.784 05:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:45:21.720 05:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:45:21.720 05:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:21.720 05:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:45:21.720 05:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:45:21.720 05:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:45:21.720 05:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:45:21.720 05:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:21.720 05:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:45:21.720 05:36:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:21.720 05:36:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:21.979 05:36:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:21.979 05:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:45:21.979 "name": "raid_bdev1", 00:45:21.979 "uuid": "43ce1f99-b665-4243-b6da-f3729543d2e7", 00:45:21.979 "strip_size_kb": 64, 00:45:21.979 "state": "online", 00:45:21.979 "raid_level": "raid5f", 00:45:21.979 "superblock": true, 00:45:21.979 "num_base_bdevs": 4, 00:45:21.979 "num_base_bdevs_discovered": 4, 00:45:21.979 "num_base_bdevs_operational": 4, 00:45:21.979 "process": { 00:45:21.979 "type": "rebuild", 00:45:21.979 "target": "spare", 00:45:21.979 "progress": { 00:45:21.979 "blocks": 44160, 00:45:21.979 "percent": 23 00:45:21.979 } 00:45:21.979 }, 00:45:21.979 "base_bdevs_list": [ 00:45:21.979 { 00:45:21.979 "name": "spare", 00:45:21.979 "uuid": "1c1fbb4f-8c8e-5640-bbd9-c66e30e2dfd5", 00:45:21.979 "is_configured": true, 00:45:21.979 "data_offset": 2048, 00:45:21.979 "data_size": 63488 00:45:21.979 }, 00:45:21.979 { 00:45:21.979 "name": "BaseBdev2", 00:45:21.979 "uuid": "ca9e31d8-2bfc-5de4-ac37-2ba829398ce2", 00:45:21.979 "is_configured": true, 00:45:21.979 "data_offset": 2048, 00:45:21.979 "data_size": 63488 00:45:21.979 }, 00:45:21.979 { 00:45:21.979 "name": "BaseBdev3", 00:45:21.979 "uuid": "0fe5e86e-6869-5254-87df-2497c7b0f8fc", 00:45:21.979 "is_configured": true, 00:45:21.979 "data_offset": 2048, 00:45:21.979 "data_size": 63488 00:45:21.979 }, 00:45:21.979 { 00:45:21.979 "name": "BaseBdev4", 00:45:21.979 "uuid": "45c95e6f-af25-5d06-bcca-be1696aba049", 00:45:21.979 "is_configured": true, 00:45:21.979 "data_offset": 2048, 00:45:21.979 "data_size": 63488 00:45:21.979 } 00:45:21.979 ] 00:45:21.979 }' 00:45:21.979 05:36:08 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:45:21.979 05:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:21.979 05:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:45:21.979 05:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:45:21.979 05:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:45:22.916 05:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:45:22.916 05:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:22.916 05:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:45:22.916 05:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:45:22.916 05:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:45:22.916 05:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:45:22.916 05:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:22.916 05:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.916 05:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:22.916 05:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:22.916 05:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:23.175 05:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:45:23.175 "name": "raid_bdev1", 00:45:23.175 "uuid": "43ce1f99-b665-4243-b6da-f3729543d2e7", 00:45:23.175 
"strip_size_kb": 64, 00:45:23.175 "state": "online", 00:45:23.175 "raid_level": "raid5f", 00:45:23.175 "superblock": true, 00:45:23.175 "num_base_bdevs": 4, 00:45:23.175 "num_base_bdevs_discovered": 4, 00:45:23.175 "num_base_bdevs_operational": 4, 00:45:23.175 "process": { 00:45:23.175 "type": "rebuild", 00:45:23.175 "target": "spare", 00:45:23.175 "progress": { 00:45:23.175 "blocks": 65280, 00:45:23.175 "percent": 34 00:45:23.175 } 00:45:23.175 }, 00:45:23.175 "base_bdevs_list": [ 00:45:23.175 { 00:45:23.175 "name": "spare", 00:45:23.175 "uuid": "1c1fbb4f-8c8e-5640-bbd9-c66e30e2dfd5", 00:45:23.175 "is_configured": true, 00:45:23.175 "data_offset": 2048, 00:45:23.175 "data_size": 63488 00:45:23.175 }, 00:45:23.175 { 00:45:23.175 "name": "BaseBdev2", 00:45:23.175 "uuid": "ca9e31d8-2bfc-5de4-ac37-2ba829398ce2", 00:45:23.175 "is_configured": true, 00:45:23.175 "data_offset": 2048, 00:45:23.175 "data_size": 63488 00:45:23.175 }, 00:45:23.175 { 00:45:23.175 "name": "BaseBdev3", 00:45:23.175 "uuid": "0fe5e86e-6869-5254-87df-2497c7b0f8fc", 00:45:23.175 "is_configured": true, 00:45:23.175 "data_offset": 2048, 00:45:23.175 "data_size": 63488 00:45:23.175 }, 00:45:23.175 { 00:45:23.175 "name": "BaseBdev4", 00:45:23.175 "uuid": "45c95e6f-af25-5d06-bcca-be1696aba049", 00:45:23.175 "is_configured": true, 00:45:23.175 "data_offset": 2048, 00:45:23.175 "data_size": 63488 00:45:23.175 } 00:45:23.175 ] 00:45:23.175 }' 00:45:23.175 05:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:45:23.175 05:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:23.175 05:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:45:23.175 05:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:45:23.175 05:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:45:24.108 
05:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:45:24.108 05:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:24.108 05:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:45:24.108 05:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:45:24.108 05:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:45:24.108 05:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:45:24.108 05:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:24.108 05:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:24.108 05:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:24.108 05:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:24.108 05:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:24.108 05:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:45:24.108 "name": "raid_bdev1", 00:45:24.108 "uuid": "43ce1f99-b665-4243-b6da-f3729543d2e7", 00:45:24.108 "strip_size_kb": 64, 00:45:24.108 "state": "online", 00:45:24.108 "raid_level": "raid5f", 00:45:24.108 "superblock": true, 00:45:24.108 "num_base_bdevs": 4, 00:45:24.108 "num_base_bdevs_discovered": 4, 00:45:24.108 "num_base_bdevs_operational": 4, 00:45:24.108 "process": { 00:45:24.108 "type": "rebuild", 00:45:24.108 "target": "spare", 00:45:24.108 "progress": { 00:45:24.108 "blocks": 88320, 00:45:24.108 "percent": 46 00:45:24.108 } 00:45:24.108 }, 00:45:24.108 "base_bdevs_list": [ 00:45:24.108 { 00:45:24.108 "name": "spare", 00:45:24.108 "uuid": 
"1c1fbb4f-8c8e-5640-bbd9-c66e30e2dfd5", 00:45:24.108 "is_configured": true, 00:45:24.108 "data_offset": 2048, 00:45:24.108 "data_size": 63488 00:45:24.108 }, 00:45:24.108 { 00:45:24.108 "name": "BaseBdev2", 00:45:24.108 "uuid": "ca9e31d8-2bfc-5de4-ac37-2ba829398ce2", 00:45:24.108 "is_configured": true, 00:45:24.108 "data_offset": 2048, 00:45:24.108 "data_size": 63488 00:45:24.108 }, 00:45:24.108 { 00:45:24.108 "name": "BaseBdev3", 00:45:24.108 "uuid": "0fe5e86e-6869-5254-87df-2497c7b0f8fc", 00:45:24.108 "is_configured": true, 00:45:24.108 "data_offset": 2048, 00:45:24.108 "data_size": 63488 00:45:24.108 }, 00:45:24.108 { 00:45:24.108 "name": "BaseBdev4", 00:45:24.108 "uuid": "45c95e6f-af25-5d06-bcca-be1696aba049", 00:45:24.108 "is_configured": true, 00:45:24.108 "data_offset": 2048, 00:45:24.108 "data_size": 63488 00:45:24.108 } 00:45:24.108 ] 00:45:24.108 }' 00:45:24.108 05:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:45:24.366 05:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:24.366 05:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:45:24.366 05:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:45:24.366 05:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:45:25.301 05:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:45:25.301 05:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:25.301 05:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:45:25.301 05:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:45:25.301 05:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:45:25.301 05:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:45:25.301 05:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:25.301 05:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:25.301 05:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:25.301 05:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:25.301 05:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:25.301 05:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:45:25.301 "name": "raid_bdev1", 00:45:25.301 "uuid": "43ce1f99-b665-4243-b6da-f3729543d2e7", 00:45:25.301 "strip_size_kb": 64, 00:45:25.301 "state": "online", 00:45:25.301 "raid_level": "raid5f", 00:45:25.301 "superblock": true, 00:45:25.301 "num_base_bdevs": 4, 00:45:25.301 "num_base_bdevs_discovered": 4, 00:45:25.301 "num_base_bdevs_operational": 4, 00:45:25.301 "process": { 00:45:25.301 "type": "rebuild", 00:45:25.301 "target": "spare", 00:45:25.301 "progress": { 00:45:25.301 "blocks": 109440, 00:45:25.301 "percent": 57 00:45:25.301 } 00:45:25.301 }, 00:45:25.301 "base_bdevs_list": [ 00:45:25.301 { 00:45:25.301 "name": "spare", 00:45:25.301 "uuid": "1c1fbb4f-8c8e-5640-bbd9-c66e30e2dfd5", 00:45:25.301 "is_configured": true, 00:45:25.301 "data_offset": 2048, 00:45:25.301 "data_size": 63488 00:45:25.301 }, 00:45:25.301 { 00:45:25.301 "name": "BaseBdev2", 00:45:25.301 "uuid": "ca9e31d8-2bfc-5de4-ac37-2ba829398ce2", 00:45:25.301 "is_configured": true, 00:45:25.301 "data_offset": 2048, 00:45:25.301 "data_size": 63488 00:45:25.301 }, 00:45:25.301 { 00:45:25.301 "name": "BaseBdev3", 00:45:25.301 "uuid": "0fe5e86e-6869-5254-87df-2497c7b0f8fc", 00:45:25.301 "is_configured": true, 00:45:25.301 
"data_offset": 2048, 00:45:25.301 "data_size": 63488 00:45:25.301 }, 00:45:25.301 { 00:45:25.301 "name": "BaseBdev4", 00:45:25.301 "uuid": "45c95e6f-af25-5d06-bcca-be1696aba049", 00:45:25.301 "is_configured": true, 00:45:25.301 "data_offset": 2048, 00:45:25.301 "data_size": 63488 00:45:25.301 } 00:45:25.301 ] 00:45:25.301 }' 00:45:25.301 05:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:45:25.559 05:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:25.559 05:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:45:25.559 05:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:45:25.559 05:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:45:26.491 05:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:45:26.491 05:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:26.491 05:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:45:26.491 05:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:45:26.491 05:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:45:26.491 05:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:45:26.491 05:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:26.491 05:36:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:26.491 05:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:26.491 05:36:13 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:45:26.491 05:36:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:26.491 05:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:45:26.491 "name": "raid_bdev1", 00:45:26.491 "uuid": "43ce1f99-b665-4243-b6da-f3729543d2e7", 00:45:26.491 "strip_size_kb": 64, 00:45:26.491 "state": "online", 00:45:26.491 "raid_level": "raid5f", 00:45:26.491 "superblock": true, 00:45:26.491 "num_base_bdevs": 4, 00:45:26.491 "num_base_bdevs_discovered": 4, 00:45:26.491 "num_base_bdevs_operational": 4, 00:45:26.491 "process": { 00:45:26.491 "type": "rebuild", 00:45:26.491 "target": "spare", 00:45:26.491 "progress": { 00:45:26.491 "blocks": 132480, 00:45:26.491 "percent": 69 00:45:26.491 } 00:45:26.491 }, 00:45:26.491 "base_bdevs_list": [ 00:45:26.491 { 00:45:26.491 "name": "spare", 00:45:26.491 "uuid": "1c1fbb4f-8c8e-5640-bbd9-c66e30e2dfd5", 00:45:26.491 "is_configured": true, 00:45:26.491 "data_offset": 2048, 00:45:26.491 "data_size": 63488 00:45:26.491 }, 00:45:26.491 { 00:45:26.491 "name": "BaseBdev2", 00:45:26.491 "uuid": "ca9e31d8-2bfc-5de4-ac37-2ba829398ce2", 00:45:26.491 "is_configured": true, 00:45:26.491 "data_offset": 2048, 00:45:26.491 "data_size": 63488 00:45:26.491 }, 00:45:26.491 { 00:45:26.491 "name": "BaseBdev3", 00:45:26.491 "uuid": "0fe5e86e-6869-5254-87df-2497c7b0f8fc", 00:45:26.491 "is_configured": true, 00:45:26.491 "data_offset": 2048, 00:45:26.491 "data_size": 63488 00:45:26.491 }, 00:45:26.491 { 00:45:26.491 "name": "BaseBdev4", 00:45:26.491 "uuid": "45c95e6f-af25-5d06-bcca-be1696aba049", 00:45:26.491 "is_configured": true, 00:45:26.491 "data_offset": 2048, 00:45:26.491 "data_size": 63488 00:45:26.491 } 00:45:26.491 ] 00:45:26.491 }' 00:45:26.491 05:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:45:26.491 05:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild 
== \r\e\b\u\i\l\d ]] 00:45:26.491 05:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:45:26.749 05:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:45:26.749 05:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:45:27.682 05:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:45:27.682 05:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:27.682 05:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:45:27.682 05:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:45:27.682 05:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:45:27.682 05:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:45:27.682 05:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:27.682 05:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:27.682 05:36:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:27.682 05:36:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:27.682 05:36:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:27.682 05:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:45:27.682 "name": "raid_bdev1", 00:45:27.682 "uuid": "43ce1f99-b665-4243-b6da-f3729543d2e7", 00:45:27.682 "strip_size_kb": 64, 00:45:27.682 "state": "online", 00:45:27.682 "raid_level": "raid5f", 00:45:27.682 "superblock": true, 00:45:27.682 "num_base_bdevs": 4, 00:45:27.682 "num_base_bdevs_discovered": 4, 
00:45:27.682 "num_base_bdevs_operational": 4, 00:45:27.682 "process": { 00:45:27.682 "type": "rebuild", 00:45:27.682 "target": "spare", 00:45:27.682 "progress": { 00:45:27.682 "blocks": 153600, 00:45:27.682 "percent": 80 00:45:27.682 } 00:45:27.682 }, 00:45:27.682 "base_bdevs_list": [ 00:45:27.682 { 00:45:27.682 "name": "spare", 00:45:27.682 "uuid": "1c1fbb4f-8c8e-5640-bbd9-c66e30e2dfd5", 00:45:27.682 "is_configured": true, 00:45:27.682 "data_offset": 2048, 00:45:27.682 "data_size": 63488 00:45:27.682 }, 00:45:27.682 { 00:45:27.682 "name": "BaseBdev2", 00:45:27.682 "uuid": "ca9e31d8-2bfc-5de4-ac37-2ba829398ce2", 00:45:27.682 "is_configured": true, 00:45:27.682 "data_offset": 2048, 00:45:27.682 "data_size": 63488 00:45:27.682 }, 00:45:27.682 { 00:45:27.682 "name": "BaseBdev3", 00:45:27.682 "uuid": "0fe5e86e-6869-5254-87df-2497c7b0f8fc", 00:45:27.682 "is_configured": true, 00:45:27.682 "data_offset": 2048, 00:45:27.682 "data_size": 63488 00:45:27.682 }, 00:45:27.682 { 00:45:27.682 "name": "BaseBdev4", 00:45:27.682 "uuid": "45c95e6f-af25-5d06-bcca-be1696aba049", 00:45:27.682 "is_configured": true, 00:45:27.682 "data_offset": 2048, 00:45:27.682 "data_size": 63488 00:45:27.682 } 00:45:27.682 ] 00:45:27.682 }' 00:45:27.682 05:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:45:27.682 05:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:27.682 05:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:45:27.941 05:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:45:27.941 05:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:45:28.877 05:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:45:28.877 05:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:45:28.877 05:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:45:28.877 05:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:45:28.877 05:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:45:28.877 05:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:45:28.877 05:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:28.877 05:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:28.877 05:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:28.877 05:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:28.877 05:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:28.877 05:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:45:28.877 "name": "raid_bdev1", 00:45:28.877 "uuid": "43ce1f99-b665-4243-b6da-f3729543d2e7", 00:45:28.877 "strip_size_kb": 64, 00:45:28.877 "state": "online", 00:45:28.877 "raid_level": "raid5f", 00:45:28.877 "superblock": true, 00:45:28.877 "num_base_bdevs": 4, 00:45:28.877 "num_base_bdevs_discovered": 4, 00:45:28.877 "num_base_bdevs_operational": 4, 00:45:28.877 "process": { 00:45:28.877 "type": "rebuild", 00:45:28.877 "target": "spare", 00:45:28.877 "progress": { 00:45:28.877 "blocks": 176640, 00:45:28.877 "percent": 92 00:45:28.877 } 00:45:28.877 }, 00:45:28.877 "base_bdevs_list": [ 00:45:28.877 { 00:45:28.877 "name": "spare", 00:45:28.877 "uuid": "1c1fbb4f-8c8e-5640-bbd9-c66e30e2dfd5", 00:45:28.877 "is_configured": true, 00:45:28.877 "data_offset": 2048, 00:45:28.877 "data_size": 63488 00:45:28.877 }, 00:45:28.877 { 00:45:28.877 "name": "BaseBdev2", 
00:45:28.877 "uuid": "ca9e31d8-2bfc-5de4-ac37-2ba829398ce2", 00:45:28.877 "is_configured": true, 00:45:28.877 "data_offset": 2048, 00:45:28.877 "data_size": 63488 00:45:28.877 }, 00:45:28.877 { 00:45:28.877 "name": "BaseBdev3", 00:45:28.877 "uuid": "0fe5e86e-6869-5254-87df-2497c7b0f8fc", 00:45:28.877 "is_configured": true, 00:45:28.877 "data_offset": 2048, 00:45:28.877 "data_size": 63488 00:45:28.877 }, 00:45:28.877 { 00:45:28.877 "name": "BaseBdev4", 00:45:28.877 "uuid": "45c95e6f-af25-5d06-bcca-be1696aba049", 00:45:28.877 "is_configured": true, 00:45:28.877 "data_offset": 2048, 00:45:28.877 "data_size": 63488 00:45:28.877 } 00:45:28.877 ] 00:45:28.877 }' 00:45:28.877 05:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:45:28.877 05:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:28.877 05:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:45:29.136 05:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:45:29.136 05:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:45:29.702 [2024-12-09 05:36:16.452830] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:45:29.702 [2024-12-09 05:36:16.452952] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:45:29.702 [2024-12-09 05:36:16.453197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:45:29.960 05:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:45:29.960 05:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:29.960 05:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:45:29.960 05:36:16 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:45:29.960 05:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:45:29.961 05:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:45:29.961 05:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:29.961 05:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:29.961 05:36:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:29.961 05:36:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:29.961 05:36:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:29.961 05:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:45:29.961 "name": "raid_bdev1", 00:45:29.961 "uuid": "43ce1f99-b665-4243-b6da-f3729543d2e7", 00:45:29.961 "strip_size_kb": 64, 00:45:29.961 "state": "online", 00:45:29.961 "raid_level": "raid5f", 00:45:29.961 "superblock": true, 00:45:29.961 "num_base_bdevs": 4, 00:45:29.961 "num_base_bdevs_discovered": 4, 00:45:29.961 "num_base_bdevs_operational": 4, 00:45:29.961 "base_bdevs_list": [ 00:45:29.961 { 00:45:29.961 "name": "spare", 00:45:29.961 "uuid": "1c1fbb4f-8c8e-5640-bbd9-c66e30e2dfd5", 00:45:29.961 "is_configured": true, 00:45:29.961 "data_offset": 2048, 00:45:29.961 "data_size": 63488 00:45:29.961 }, 00:45:29.961 { 00:45:29.961 "name": "BaseBdev2", 00:45:29.961 "uuid": "ca9e31d8-2bfc-5de4-ac37-2ba829398ce2", 00:45:29.961 "is_configured": true, 00:45:29.961 "data_offset": 2048, 00:45:29.961 "data_size": 63488 00:45:29.961 }, 00:45:29.961 { 00:45:29.961 "name": "BaseBdev3", 00:45:29.961 "uuid": "0fe5e86e-6869-5254-87df-2497c7b0f8fc", 00:45:29.961 "is_configured": true, 00:45:29.961 "data_offset": 2048, 00:45:29.961 
"data_size": 63488 00:45:29.961 }, 00:45:29.961 { 00:45:29.961 "name": "BaseBdev4", 00:45:29.961 "uuid": "45c95e6f-af25-5d06-bcca-be1696aba049", 00:45:29.961 "is_configured": true, 00:45:29.961 "data_offset": 2048, 00:45:29.961 "data_size": 63488 00:45:29.961 } 00:45:29.961 ] 00:45:29.961 }' 00:45:29.961 05:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:45:30.220 05:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:45:30.220 05:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:45:30.220 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:45:30.220 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:45:30.220 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:45:30.220 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:45:30.220 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:45:30.220 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:45:30.220 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:45:30.220 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:30.220 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:30.220 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:30.220 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:30.220 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:30.220 05:36:17 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:45:30.220 "name": "raid_bdev1", 00:45:30.220 "uuid": "43ce1f99-b665-4243-b6da-f3729543d2e7", 00:45:30.220 "strip_size_kb": 64, 00:45:30.220 "state": "online", 00:45:30.220 "raid_level": "raid5f", 00:45:30.220 "superblock": true, 00:45:30.220 "num_base_bdevs": 4, 00:45:30.220 "num_base_bdevs_discovered": 4, 00:45:30.220 "num_base_bdevs_operational": 4, 00:45:30.220 "base_bdevs_list": [ 00:45:30.220 { 00:45:30.220 "name": "spare", 00:45:30.220 "uuid": "1c1fbb4f-8c8e-5640-bbd9-c66e30e2dfd5", 00:45:30.220 "is_configured": true, 00:45:30.220 "data_offset": 2048, 00:45:30.220 "data_size": 63488 00:45:30.220 }, 00:45:30.220 { 00:45:30.220 "name": "BaseBdev2", 00:45:30.220 "uuid": "ca9e31d8-2bfc-5de4-ac37-2ba829398ce2", 00:45:30.220 "is_configured": true, 00:45:30.220 "data_offset": 2048, 00:45:30.220 "data_size": 63488 00:45:30.220 }, 00:45:30.220 { 00:45:30.220 "name": "BaseBdev3", 00:45:30.220 "uuid": "0fe5e86e-6869-5254-87df-2497c7b0f8fc", 00:45:30.220 "is_configured": true, 00:45:30.220 "data_offset": 2048, 00:45:30.220 "data_size": 63488 00:45:30.220 }, 00:45:30.220 { 00:45:30.220 "name": "BaseBdev4", 00:45:30.220 "uuid": "45c95e6f-af25-5d06-bcca-be1696aba049", 00:45:30.220 "is_configured": true, 00:45:30.220 "data_offset": 2048, 00:45:30.220 "data_size": 63488 00:45:30.220 } 00:45:30.220 ] 00:45:30.220 }' 00:45:30.220 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:45:30.220 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:45:30.220 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:45:30.479 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:45:30.479 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 
00:45:30.479 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:45:30.479 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:45:30.479 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:45:30.479 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:45:30.479 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:45:30.479 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:45:30.479 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:45:30.479 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:45:30.479 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:45:30.479 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:30.479 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:30.479 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:30.479 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:30.479 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:30.479 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:45:30.479 "name": "raid_bdev1", 00:45:30.479 "uuid": "43ce1f99-b665-4243-b6da-f3729543d2e7", 00:45:30.479 "strip_size_kb": 64, 00:45:30.479 "state": "online", 00:45:30.479 "raid_level": "raid5f", 00:45:30.479 "superblock": true, 00:45:30.479 "num_base_bdevs": 4, 00:45:30.479 "num_base_bdevs_discovered": 4, 00:45:30.479 
"num_base_bdevs_operational": 4, 00:45:30.479 "base_bdevs_list": [ 00:45:30.479 { 00:45:30.479 "name": "spare", 00:45:30.479 "uuid": "1c1fbb4f-8c8e-5640-bbd9-c66e30e2dfd5", 00:45:30.479 "is_configured": true, 00:45:30.479 "data_offset": 2048, 00:45:30.479 "data_size": 63488 00:45:30.480 }, 00:45:30.480 { 00:45:30.480 "name": "BaseBdev2", 00:45:30.480 "uuid": "ca9e31d8-2bfc-5de4-ac37-2ba829398ce2", 00:45:30.480 "is_configured": true, 00:45:30.480 "data_offset": 2048, 00:45:30.480 "data_size": 63488 00:45:30.480 }, 00:45:30.480 { 00:45:30.480 "name": "BaseBdev3", 00:45:30.480 "uuid": "0fe5e86e-6869-5254-87df-2497c7b0f8fc", 00:45:30.480 "is_configured": true, 00:45:30.480 "data_offset": 2048, 00:45:30.480 "data_size": 63488 00:45:30.480 }, 00:45:30.480 { 00:45:30.480 "name": "BaseBdev4", 00:45:30.480 "uuid": "45c95e6f-af25-5d06-bcca-be1696aba049", 00:45:30.480 "is_configured": true, 00:45:30.480 "data_offset": 2048, 00:45:30.480 "data_size": 63488 00:45:30.480 } 00:45:30.480 ] 00:45:30.480 }' 00:45:30.480 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:45:30.480 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:31.048 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:45:31.048 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:31.048 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:31.048 [2024-12-09 05:36:17.739107] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:45:31.048 [2024-12-09 05:36:17.739314] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:45:31.049 [2024-12-09 05:36:17.739458] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:45:31.049 [2024-12-09 05:36:17.739581] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:45:31.049 [2024-12-09 05:36:17.739628] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:45:31.049 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:31.049 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:31.049 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:45:31.049 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:31.049 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:31.049 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:31.049 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:45:31.049 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:45:31.049 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:45:31.049 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:45:31.049 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:45:31.049 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:45:31.049 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:45:31.049 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:45:31.049 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:45:31.049 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:45:31.049 05:36:17 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:45:31.049 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:45:31.049 05:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:45:31.327 /dev/nbd0 00:45:31.327 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:45:31.327 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:45:31.327 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:45:31.327 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:45:31.327 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:45:31.327 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:45:31.327 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:45:31.327 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:45:31.327 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:45:31.327 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:45:31.327 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:45:31.327 1+0 records in 00:45:31.327 1+0 records out 00:45:31.327 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311455 s, 13.2 MB/s 00:45:31.327 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:31.327 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # size=4096 00:45:31.327 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:31.327 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:45:31.327 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:45:31.327 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:45:31.327 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:45:31.327 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:45:31.586 /dev/nbd1 00:45:31.586 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:45:31.586 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:45:31.586 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:45:31.586 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:45:31.586 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:45:31.586 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:45:31.586 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:45:31.586 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:45:31.586 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:45:31.586 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:45:31.586 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:45:31.586 1+0 records in 00:45:31.586 1+0 records out 00:45:31.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310346 s, 13.2 MB/s 00:45:31.586 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:31.586 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:45:31.586 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:31.586 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:45:31.586 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:45:31.586 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:45:31.586 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:45:31.586 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:45:31.845 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:45:31.845 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:45:31.845 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:45:31.845 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:45:31.845 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:45:31.845 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:45:31.845 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:45:32.103 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:45:32.103 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:45:32.103 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:45:32.103 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:45:32.103 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:45:32.103 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:45:32.103 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:45:32.103 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:45:32.103 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:45:32.103 05:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:45:32.361 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:45:32.361 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:45:32.361 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:45:32.361 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:45:32.361 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:45:32.361 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:45:32.361 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:45:32.361 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:45:32.361 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # 
'[' true = true ']'
00:45:32.361 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:45:32.361 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:45:32.361 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:45:32.361 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:45:32.361 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:45:32.361 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:45:32.361 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:45:32.361 [2024-12-09 05:36:19.218348] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:45:32.361 [2024-12-09 05:36:19.218411] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:45:32.361 [2024-12-09 05:36:19.218443] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
00:45:32.361 [2024-12-09 05:36:19.218458] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:45:32.361 [2024-12-09 05:36:19.221896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:45:32.361 [2024-12-09 05:36:19.221956] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:45:32.361 [2024-12-09 05:36:19.222077] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:45:32.361 [2024-12-09 05:36:19.222150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:45:32.361 [2024-12-09 05:36:19.222353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:45:32.361 spare
00:45:32.361 [2024-12-09 05:36:19.222558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:45:32.361 [2024-12-09 05:36:19.222700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:45:32.361 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:45:32.361 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:45:32.361 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:45:32.361 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:45:32.361 [2024-12-09 05:36:19.322845] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:45:32.361 [2024-12-09 05:36:19.322897] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:45:32.361 [2024-12-09 05:36:19.323288] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0
00:45:32.361 [2024-12-09 05:36:19.329883] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:45:32.361 [2024-12-09 05:36:19.329911] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00
00:45:32.361 [2024-12-09 05:36:19.330152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:45:32.619 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:45:32.619 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:45:32.619 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:45:32.619 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:45:32.619 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:45:32.619 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:45:32.619 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:45:32.619 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:45:32.619 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:45:32.619 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:45:32.619 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:45:32.619 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:45:32.619 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:45:32.619 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:45:32.620 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:45:32.620 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:45:32.620 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:45:32.620 "name": "raid_bdev1",
00:45:32.620 "uuid": "43ce1f99-b665-4243-b6da-f3729543d2e7",
00:45:32.620 "strip_size_kb": 64,
00:45:32.620 "state": "online",
00:45:32.620 "raid_level": "raid5f",
00:45:32.620 "superblock": true,
00:45:32.620 "num_base_bdevs": 4,
00:45:32.620 "num_base_bdevs_discovered": 4,
00:45:32.620 "num_base_bdevs_operational": 4,
00:45:32.620 "base_bdevs_list": [
00:45:32.620 {
00:45:32.620 "name": "spare",
00:45:32.620 "uuid": "1c1fbb4f-8c8e-5640-bbd9-c66e30e2dfd5",
00:45:32.620 "is_configured": true,
00:45:32.620 "data_offset": 2048,
00:45:32.620 "data_size": 63488
00:45:32.620 },
00:45:32.620 {
00:45:32.620 "name": "BaseBdev2",
00:45:32.620 "uuid": "ca9e31d8-2bfc-5de4-ac37-2ba829398ce2",
00:45:32.620 "is_configured": true,
00:45:32.620 "data_offset": 2048,
00:45:32.620 "data_size": 63488
00:45:32.620 },
00:45:32.620 {
00:45:32.620 "name": "BaseBdev3",
00:45:32.620 "uuid": "0fe5e86e-6869-5254-87df-2497c7b0f8fc",
00:45:32.620 "is_configured": true,
00:45:32.620 "data_offset": 2048,
00:45:32.620 "data_size": 63488
00:45:32.620 },
00:45:32.620 {
00:45:32.620 "name": "BaseBdev4",
00:45:32.620 "uuid": "45c95e6f-af25-5d06-bcca-be1696aba049",
00:45:32.620 "is_configured": true,
00:45:32.620 "data_offset": 2048,
00:45:32.620 "data_size": 63488
00:45:32.620 }
00:45:32.620 ]
00:45:32.620 }'
00:45:32.620 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:45:32.620 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:45:33.186 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:45:33.186 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:45:33.186 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:45:33.186 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:45:33.186 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:45:33.186 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:45:33.186 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:45:33.186 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:45:33.186 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:45:33.186 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:45:33.186 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:45:33.186 "name": "raid_bdev1",
00:45:33.186 "uuid": "43ce1f99-b665-4243-b6da-f3729543d2e7",
00:45:33.186 "strip_size_kb": 64,
00:45:33.186 "state": "online",
00:45:33.186 "raid_level": "raid5f",
00:45:33.186 "superblock": true,
00:45:33.186 "num_base_bdevs": 4,
00:45:33.186 "num_base_bdevs_discovered": 4,
00:45:33.186 "num_base_bdevs_operational": 4,
00:45:33.186 "base_bdevs_list": [
00:45:33.186 {
00:45:33.186 "name": "spare",
00:45:33.186 "uuid": "1c1fbb4f-8c8e-5640-bbd9-c66e30e2dfd5",
00:45:33.186 "is_configured": true,
00:45:33.186 "data_offset": 2048,
00:45:33.186 "data_size": 63488
00:45:33.186 },
00:45:33.186 {
00:45:33.186 "name": "BaseBdev2",
00:45:33.186 "uuid": "ca9e31d8-2bfc-5de4-ac37-2ba829398ce2",
00:45:33.186 "is_configured": true,
00:45:33.186 "data_offset": 2048,
00:45:33.187 "data_size": 63488
00:45:33.187 },
00:45:33.187 {
00:45:33.187 "name": "BaseBdev3",
00:45:33.187 "uuid": "0fe5e86e-6869-5254-87df-2497c7b0f8fc",
00:45:33.187 "is_configured": true,
00:45:33.187 "data_offset": 2048,
00:45:33.187 "data_size": 63488
00:45:33.187 },
00:45:33.187 {
00:45:33.187 "name": "BaseBdev4",
00:45:33.187 "uuid": "45c95e6f-af25-5d06-bcca-be1696aba049",
00:45:33.187 "is_configured": true,
00:45:33.187 "data_offset": 2048,
00:45:33.187 "data_size": 63488
00:45:33.187 }
00:45:33.187 ]
00:45:33.187 }'
00:45:33.187 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:45:33.187 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:45:33.187 05:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:45:33.187 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:45:33.187 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:45:33.187 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:45:33.187 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:45:33.187 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:45:33.187 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:45:33.187 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:45:33.187 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:45:33.187 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:45:33.187 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:45:33.187 [2024-12-09 05:36:20.065987] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:45:33.187 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:45:33.187 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:45:33.187 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:45:33.187 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:45:33.187 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:45:33.187 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:45:33.187 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:45:33.187 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:45:33.187 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:45:33.187 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:45:33.187 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:45:33.187 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:45:33.187 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:45:33.187 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:45:33.187 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:45:33.187 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:45:33.187 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:45:33.187 "name": "raid_bdev1",
00:45:33.187 "uuid": "43ce1f99-b665-4243-b6da-f3729543d2e7",
00:45:33.187 "strip_size_kb": 64,
00:45:33.187 "state": "online",
00:45:33.187 "raid_level": "raid5f",
00:45:33.187 "superblock": true,
00:45:33.187 "num_base_bdevs": 4,
00:45:33.187 "num_base_bdevs_discovered": 3,
00:45:33.187 "num_base_bdevs_operational": 3,
00:45:33.187 "base_bdevs_list": [
00:45:33.187 {
00:45:33.187 "name": null,
00:45:33.187 "uuid": "00000000-0000-0000-0000-000000000000",
00:45:33.187 "is_configured": false,
00:45:33.187 "data_offset": 0,
00:45:33.187 "data_size": 63488
00:45:33.187 },
00:45:33.187 {
00:45:33.187 "name": "BaseBdev2",
00:45:33.187 "uuid": "ca9e31d8-2bfc-5de4-ac37-2ba829398ce2",
00:45:33.187 "is_configured": true,
00:45:33.187 "data_offset": 2048,
00:45:33.187 "data_size": 63488
00:45:33.187 },
00:45:33.187 {
00:45:33.187 "name": "BaseBdev3",
00:45:33.187 "uuid": "0fe5e86e-6869-5254-87df-2497c7b0f8fc",
00:45:33.187 "is_configured": true,
00:45:33.187 "data_offset": 2048,
00:45:33.187 "data_size": 63488
00:45:33.187 },
00:45:33.187 {
00:45:33.187 "name": "BaseBdev4",
00:45:33.187 "uuid": "45c95e6f-af25-5d06-bcca-be1696aba049",
00:45:33.187 "is_configured": true,
00:45:33.187 "data_offset": 2048,
00:45:33.187 "data_size": 63488
00:45:33.187 }
00:45:33.187 ]
00:45:33.187 }'
00:45:33.187 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:45:33.187 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:45:33.753 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:45:33.753 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:45:33.753 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:45:33.753 [2024-12-09 05:36:20.574356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:45:33.753 [2024-12-09 05:36:20.574646] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:45:33.753 [2024-12-09 05:36:20.574679] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:45:33.753 [2024-12-09 05:36:20.574729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:45:33.753 [2024-12-09 05:36:20.589677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0
00:45:33.753 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:45:33.753 05:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1
00:45:33.753 [2024-12-09 05:36:20.599865] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:45:34.689 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:45:34.689 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:45:34.689 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:45:34.689 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:45:34.689 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:45:34.689 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:45:34.689 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:45:34.689 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:45:34.689 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:45:34.689 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:45:34.689 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:45:34.689 "name": "raid_bdev1",
00:45:34.689 "uuid": "43ce1f99-b665-4243-b6da-f3729543d2e7",
00:45:34.689 "strip_size_kb": 64,
00:45:34.689 "state": "online",
00:45:34.689 "raid_level": "raid5f",
00:45:34.689 "superblock": true,
00:45:34.689 "num_base_bdevs": 4,
00:45:34.689 "num_base_bdevs_discovered": 4,
00:45:34.689 "num_base_bdevs_operational": 4,
00:45:34.689 "process": {
00:45:34.689 "type": "rebuild",
00:45:34.689 "target": "spare",
00:45:34.689 "progress": {
00:45:34.689 "blocks": 17280,
00:45:34.689 "percent": 9
00:45:34.689 }
00:45:34.689 },
00:45:34.689 "base_bdevs_list": [
00:45:34.689 {
00:45:34.689 "name": "spare",
00:45:34.689 "uuid": "1c1fbb4f-8c8e-5640-bbd9-c66e30e2dfd5",
00:45:34.689 "is_configured": true,
00:45:34.689 "data_offset": 2048,
00:45:34.689 "data_size": 63488
00:45:34.689 },
00:45:34.689 {
00:45:34.689 "name": "BaseBdev2",
00:45:34.689 "uuid": "ca9e31d8-2bfc-5de4-ac37-2ba829398ce2",
00:45:34.689 "is_configured": true,
00:45:34.689 "data_offset": 2048,
00:45:34.689 "data_size": 63488
00:45:34.689 },
00:45:34.689 {
00:45:34.689 "name": "BaseBdev3",
00:45:34.689 "uuid": "0fe5e86e-6869-5254-87df-2497c7b0f8fc",
00:45:34.689 "is_configured": true,
00:45:34.689 "data_offset": 2048,
00:45:34.689 "data_size": 63488
00:45:34.689 },
00:45:34.689 {
00:45:34.689 "name": "BaseBdev4",
00:45:34.689 "uuid": "45c95e6f-af25-5d06-bcca-be1696aba049",
00:45:34.689 "is_configured": true,
00:45:34.689 "data_offset": 2048,
00:45:34.689 "data_size": 63488
00:45:34.689 }
00:45:34.689 ]
00:45:34.689 }'
00:45:34.948 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:45:34.948 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:45:34.948 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:45:34.948 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:45:34.948 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:45:34.948 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:45:34.948 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:45:34.948 [2024-12-09 05:36:21.753779] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:45:34.948 [2024-12-09 05:36:21.811575] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:45:34.948 [2024-12-09 05:36:21.811659] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:45:34.948 [2024-12-09 05:36:21.811686] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:45:34.948 [2024-12-09 05:36:21.811704] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:45:34.948 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:45:34.948 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:45:34.948 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:45:34.948 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:45:34.948 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:45:34.948 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:45:34.948 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:45:34.948 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:45:34.948 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:45:34.948 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:45:34.948 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:45:34.948 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:45:34.948 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:45:34.948 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:45:34.948 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:45:34.948 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:45:34.948 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:45:34.948 "name": "raid_bdev1",
00:45:34.948 "uuid": "43ce1f99-b665-4243-b6da-f3729543d2e7",
00:45:34.948 "strip_size_kb": 64,
00:45:34.948 "state": "online",
00:45:34.948 "raid_level": "raid5f",
00:45:34.948 "superblock": true,
00:45:34.948 "num_base_bdevs": 4,
00:45:34.948 "num_base_bdevs_discovered": 3,
00:45:34.948 "num_base_bdevs_operational": 3,
00:45:34.948 "base_bdevs_list": [
00:45:34.948 {
00:45:34.948 "name": null,
00:45:34.948 "uuid": "00000000-0000-0000-0000-000000000000",
00:45:34.948 "is_configured": false,
00:45:34.948 "data_offset": 0,
00:45:34.948 "data_size": 63488
00:45:34.948 },
00:45:34.948 {
00:45:34.948 "name": "BaseBdev2",
00:45:34.948 "uuid": "ca9e31d8-2bfc-5de4-ac37-2ba829398ce2",
00:45:34.948 "is_configured": true,
00:45:34.948 "data_offset": 2048,
00:45:34.948 "data_size": 63488
00:45:34.948 },
00:45:34.948 {
00:45:34.948 "name": "BaseBdev3",
00:45:34.948 "uuid": "0fe5e86e-6869-5254-87df-2497c7b0f8fc",
00:45:34.948 "is_configured": true,
00:45:34.948 "data_offset": 2048,
00:45:34.948 "data_size": 63488
00:45:34.948 },
00:45:34.948 {
00:45:34.948 "name": "BaseBdev4",
00:45:34.948 "uuid": "45c95e6f-af25-5d06-bcca-be1696aba049",
00:45:34.948 "is_configured": true,
00:45:34.948 "data_offset": 2048,
00:45:34.948 "data_size": 63488
00:45:34.948 }
00:45:34.948 ]
00:45:34.948 }'
00:45:34.948 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:45:34.948 05:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:45:35.514 05:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:45:35.514 05:36:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:45:35.514 05:36:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:45:35.514 [2024-12-09 05:36:22.369160] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:45:35.514 [2024-12-09 05:36:22.369259] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:45:35.514 [2024-12-09 05:36:22.369299] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380
00:45:35.514 [2024-12-09 05:36:22.369320] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:45:35.514 [2024-12-09 05:36:22.370036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:45:35.514 [2024-12-09 05:36:22.370075] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:45:35.514 [2024-12-09 05:36:22.370212] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:45:35.514 [2024-12-09 05:36:22.370238] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:45:35.514 [2024-12-09 05:36:22.370252] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:45:35.514 [2024-12-09 05:36:22.370290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:45:35.514 [2024-12-09 05:36:22.385314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 spare
00:45:35.514 05:36:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:45:35.514 05:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1
00:45:35.514 [2024-12-09 05:36:22.394974] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:45:36.451 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:45:36.451 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:45:36.451 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:45:36.451 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:45:36.451 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:45:36.451 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:45:36.451 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:45:36.451 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:45:36.451 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:45:36.451 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:45:36.709 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:45:36.709 "name": "raid_bdev1",
00:45:36.709 "uuid": "43ce1f99-b665-4243-b6da-f3729543d2e7",
00:45:36.709 "strip_size_kb": 64,
00:45:36.709 "state": "online",
00:45:36.709 "raid_level": "raid5f",
00:45:36.709 "superblock": true,
00:45:36.709 "num_base_bdevs": 4,
00:45:36.709 "num_base_bdevs_discovered": 4,
00:45:36.709 "num_base_bdevs_operational": 4,
00:45:36.709 "process": {
00:45:36.709 "type": "rebuild",
00:45:36.709 "target": "spare",
00:45:36.709 "progress": {
00:45:36.709 "blocks": 17280,
00:45:36.709 "percent": 9
00:45:36.709 }
00:45:36.709 },
00:45:36.709 "base_bdevs_list": [
00:45:36.709 {
00:45:36.709 "name": "spare",
00:45:36.709 "uuid": "1c1fbb4f-8c8e-5640-bbd9-c66e30e2dfd5",
00:45:36.709 "is_configured": true,
00:45:36.709 "data_offset": 2048,
00:45:36.709 "data_size": 63488
00:45:36.709 },
00:45:36.709 {
00:45:36.709 "name": "BaseBdev2",
00:45:36.709 "uuid": "ca9e31d8-2bfc-5de4-ac37-2ba829398ce2",
00:45:36.709 "is_configured": true,
00:45:36.709 "data_offset": 2048,
00:45:36.709 "data_size": 63488
00:45:36.709 },
00:45:36.709 {
00:45:36.709 "name": "BaseBdev3",
00:45:36.709 "uuid": "0fe5e86e-6869-5254-87df-2497c7b0f8fc",
00:45:36.709 "is_configured": true,
00:45:36.709 "data_offset": 2048,
00:45:36.709 "data_size": 63488
00:45:36.709 },
00:45:36.709 {
00:45:36.709 "name": "BaseBdev4",
00:45:36.709 "uuid": "45c95e6f-af25-5d06-bcca-be1696aba049",
00:45:36.709 "is_configured": true,
00:45:36.709 "data_offset": 2048,
00:45:36.709 "data_size": 63488
00:45:36.709 }
00:45:36.709 ]
00:45:36.709 }'
00:45:36.709 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:45:36.709 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:45:36.709 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:45:36.709 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:45:36.709 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:45:36.709 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:45:36.709 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:45:36.709 [2024-12-09 05:36:23.577087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:45:36.709 [2024-12-09 05:36:23.607214] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:45:36.709 [2024-12-09 05:36:23.607284] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:45:36.709 [2024-12-09 05:36:23.607315] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:45:36.709 [2024-12-09 05:36:23.607326] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:45:36.709 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:45:36.709 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:45:36.709 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:45:36.709 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:45:36.709 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:45:36.709 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:45:36.709 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:45:36.709 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:45:36.709 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:45:36.709 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:45:36.709 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:45:36.709 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:45:36.709 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:45:36.709 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:45:36.709 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:45:36.709 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:45:36.970 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:45:36.970 "name": "raid_bdev1",
00:45:36.970 "uuid": "43ce1f99-b665-4243-b6da-f3729543d2e7",
00:45:36.970 "strip_size_kb": 64,
00:45:36.970 "state": "online",
00:45:36.970 "raid_level": "raid5f",
00:45:36.970 "superblock": true,
00:45:36.970 "num_base_bdevs": 4,
00:45:36.970 "num_base_bdevs_discovered": 3,
00:45:36.970 "num_base_bdevs_operational": 3,
00:45:36.970 "base_bdevs_list": [
00:45:36.970 {
00:45:36.970 "name": null,
00:45:36.970 "uuid": "00000000-0000-0000-0000-000000000000",
00:45:36.970 "is_configured": false,
00:45:36.970 "data_offset": 0,
00:45:36.970 "data_size": 63488
00:45:36.970 },
00:45:36.970 {
00:45:36.970 "name": "BaseBdev2",
00:45:36.970 "uuid": "ca9e31d8-2bfc-5de4-ac37-2ba829398ce2",
00:45:36.970 "is_configured": true,
00:45:36.970 "data_offset": 2048,
00:45:36.970 "data_size": 63488
00:45:36.970 },
00:45:36.970 {
00:45:36.970 "name": "BaseBdev3",
00:45:36.970 "uuid": "0fe5e86e-6869-5254-87df-2497c7b0f8fc",
00:45:36.970 "is_configured": true,
00:45:36.970 "data_offset": 2048,
00:45:36.970 "data_size": 63488
00:45:36.970 },
00:45:36.970 {
00:45:36.970 "name": "BaseBdev4",
00:45:36.970 "uuid": "45c95e6f-af25-5d06-bcca-be1696aba049",
00:45:36.970 "is_configured": true,
00:45:36.970 "data_offset": 2048,
00:45:36.970 "data_size": 63488
00:45:36.970 }
00:45:36.970 ]
00:45:36.970 }'
00:45:36.970 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:45:36.970 05:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:45:37.231 05:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:45:37.231 05:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:45:37.231 05:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:45:37.231 05:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:45:37.231 05:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:45:37.231 05:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:45:37.231 05:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:45:37.231 05:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:45:37.231 05:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:45:37.231 05:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:45:37.489 05:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:45:37.489 "name": "raid_bdev1",
00:45:37.489 "uuid": "43ce1f99-b665-4243-b6da-f3729543d2e7",
00:45:37.489 "strip_size_kb": 64,
00:45:37.489 "state": "online",
00:45:37.489 "raid_level": "raid5f",
00:45:37.489 "superblock": true,
00:45:37.489 "num_base_bdevs": 4,
00:45:37.489 "num_base_bdevs_discovered": 3,
00:45:37.489 "num_base_bdevs_operational": 3,
00:45:37.489 "base_bdevs_list": [
00:45:37.489 {
00:45:37.489 "name": null,
00:45:37.489 "uuid": "00000000-0000-0000-0000-000000000000",
00:45:37.489 "is_configured": false,
00:45:37.489 "data_offset": 0,
00:45:37.489 "data_size": 63488
00:45:37.490 },
00:45:37.490 {
00:45:37.490 "name": "BaseBdev2",
00:45:37.490 "uuid": "ca9e31d8-2bfc-5de4-ac37-2ba829398ce2",
00:45:37.490 "is_configured": true,
00:45:37.490 "data_offset": 2048,
00:45:37.490 "data_size": 63488
00:45:37.490 },
00:45:37.490 {
00:45:37.490 "name": "BaseBdev3",
00:45:37.490 "uuid": "0fe5e86e-6869-5254-87df-2497c7b0f8fc",
00:45:37.490 "is_configured": true,
00:45:37.490 "data_offset": 2048,
00:45:37.490 "data_size": 63488
00:45:37.490 },
00:45:37.490 {
00:45:37.490 "name": "BaseBdev4",
00:45:37.490 "uuid": "45c95e6f-af25-5d06-bcca-be1696aba049",
00:45:37.490 "is_configured": true,
00:45:37.490 "data_offset": 2048,
00:45:37.490 "data_size": 63488
00:45:37.490 }
00:45:37.490 ]
00:45:37.490 }'
00:45:37.490 05:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:45:37.490 05:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:45:37.490 05:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:45:37.490 05:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:45:37.490 05:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:45:37.490 05:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:45:37.490 05:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:45:37.490 05:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:45:37.490 05:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:45:37.490 05:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:45:37.490 05:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:45:37.490 [2024-12-09 05:36:24.353636] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:45:37.490 [2024-12-09 05:36:24.353700] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:45:37.490 [2024-12-09 05:36:24.353735] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980
00:45:37.490 [2024-12-09 05:36:24.353751] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:45:37.490 [2024-12-09 05:36:24.354418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:45:37.490 [2024-12-09 05:36:24.354450] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:45:37.490 [2024-12-09 05:36:24.354583] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:45:37.490 [2024-12-09 05:36:24.354605] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:45:37.490 [2024-12-09 05:36:24.354623] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:45:37.490 [2024-12-09 05:36:24.354636] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:45:37.490 BaseBdev1
00:45:37.490 05:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:45:37.490 05:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1
00:45:38.423 05:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:45:38.423 05:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:45:38.423 05:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:45:38.423 05:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:45:38.423 05:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:45:38.423 05:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:45:38.423 05:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:45:38.423 05:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:45:38.423 05:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:45:38.423 05:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:45:38.423 05:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:45:38.423 05:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:45:38.423 05:36:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:45:38.423 05:36:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:45:38.423 05:36:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:45:38.681 05:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:45:38.681 "name": "raid_bdev1",
00:45:38.681 "uuid": "43ce1f99-b665-4243-b6da-f3729543d2e7",
00:45:38.681 "strip_size_kb": 64,
00:45:38.681 "state": "online",
00:45:38.681 "raid_level": "raid5f",
00:45:38.681 "superblock": true,
00:45:38.681 "num_base_bdevs": 4,
00:45:38.681 "num_base_bdevs_discovered": 3,
00:45:38.681 "num_base_bdevs_operational": 3,
00:45:38.681 "base_bdevs_list": [
00:45:38.681 {
00:45:38.681 "name": null,
00:45:38.681 "uuid": "00000000-0000-0000-0000-000000000000",
00:45:38.681 "is_configured": false,
00:45:38.681
"data_offset": 0, 00:45:38.681 "data_size": 63488 00:45:38.681 }, 00:45:38.681 { 00:45:38.681 "name": "BaseBdev2", 00:45:38.681 "uuid": "ca9e31d8-2bfc-5de4-ac37-2ba829398ce2", 00:45:38.681 "is_configured": true, 00:45:38.681 "data_offset": 2048, 00:45:38.681 "data_size": 63488 00:45:38.681 }, 00:45:38.681 { 00:45:38.681 "name": "BaseBdev3", 00:45:38.681 "uuid": "0fe5e86e-6869-5254-87df-2497c7b0f8fc", 00:45:38.681 "is_configured": true, 00:45:38.681 "data_offset": 2048, 00:45:38.681 "data_size": 63488 00:45:38.681 }, 00:45:38.681 { 00:45:38.681 "name": "BaseBdev4", 00:45:38.681 "uuid": "45c95e6f-af25-5d06-bcca-be1696aba049", 00:45:38.681 "is_configured": true, 00:45:38.681 "data_offset": 2048, 00:45:38.681 "data_size": 63488 00:45:38.681 } 00:45:38.681 ] 00:45:38.681 }' 00:45:38.681 05:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:45:38.681 05:36:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:38.940 05:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:45:38.940 05:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:45:38.940 05:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:45:38.940 05:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:45:38.940 05:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:45:38.940 05:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:38.940 05:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:38.940 05:36:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:38.940 05:36:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:45:38.940 05:36:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:39.199 05:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:45:39.199 "name": "raid_bdev1", 00:45:39.199 "uuid": "43ce1f99-b665-4243-b6da-f3729543d2e7", 00:45:39.199 "strip_size_kb": 64, 00:45:39.199 "state": "online", 00:45:39.199 "raid_level": "raid5f", 00:45:39.199 "superblock": true, 00:45:39.199 "num_base_bdevs": 4, 00:45:39.199 "num_base_bdevs_discovered": 3, 00:45:39.199 "num_base_bdevs_operational": 3, 00:45:39.199 "base_bdevs_list": [ 00:45:39.199 { 00:45:39.199 "name": null, 00:45:39.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:45:39.199 "is_configured": false, 00:45:39.199 "data_offset": 0, 00:45:39.199 "data_size": 63488 00:45:39.199 }, 00:45:39.199 { 00:45:39.199 "name": "BaseBdev2", 00:45:39.199 "uuid": "ca9e31d8-2bfc-5de4-ac37-2ba829398ce2", 00:45:39.199 "is_configured": true, 00:45:39.199 "data_offset": 2048, 00:45:39.199 "data_size": 63488 00:45:39.199 }, 00:45:39.199 { 00:45:39.199 "name": "BaseBdev3", 00:45:39.199 "uuid": "0fe5e86e-6869-5254-87df-2497c7b0f8fc", 00:45:39.199 "is_configured": true, 00:45:39.199 "data_offset": 2048, 00:45:39.199 "data_size": 63488 00:45:39.199 }, 00:45:39.199 { 00:45:39.199 "name": "BaseBdev4", 00:45:39.199 "uuid": "45c95e6f-af25-5d06-bcca-be1696aba049", 00:45:39.199 "is_configured": true, 00:45:39.199 "data_offset": 2048, 00:45:39.199 "data_size": 63488 00:45:39.199 } 00:45:39.199 ] 00:45:39.199 }' 00:45:39.199 05:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:45:39.199 05:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:45:39.199 05:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:45:39.199 05:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:45:39.199 
05:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:45:39.199 05:36:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:45:39.199 05:36:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:45:39.199 05:36:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:45:39.199 05:36:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:39.199 05:36:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:45:39.199 05:36:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:39.199 05:36:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:45:39.199 05:36:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:39.199 05:36:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:39.199 [2024-12-09 05:36:26.054755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:45:39.199 [2024-12-09 05:36:26.055015] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:45:39.199 [2024-12-09 05:36:26.055039] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:45:39.199 request: 00:45:39.199 { 00:45:39.199 "base_bdev": "BaseBdev1", 00:45:39.199 "raid_bdev": "raid_bdev1", 00:45:39.199 "method": "bdev_raid_add_base_bdev", 00:45:39.199 "req_id": 1 00:45:39.199 } 00:45:39.199 Got JSON-RPC error response 00:45:39.199 response: 00:45:39.199 { 00:45:39.199 "code": -22, 00:45:39.199 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:45:39.199 } 00:45:39.199 05:36:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:45:39.199 05:36:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:45:39.199 05:36:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:39.199 05:36:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:39.199 05:36:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:39.199 05:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:45:40.134 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:45:40.134 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:45:40.134 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:45:40.134 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:45:40.134 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:45:40.134 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:45:40.134 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:45:40.134 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:45:40.134 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:45:40.134 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:45:40.134 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:40.134 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:40.134 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:40.134 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:40.134 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:40.391 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:45:40.391 "name": "raid_bdev1", 00:45:40.392 "uuid": "43ce1f99-b665-4243-b6da-f3729543d2e7", 00:45:40.392 "strip_size_kb": 64, 00:45:40.392 "state": "online", 00:45:40.392 "raid_level": "raid5f", 00:45:40.392 "superblock": true, 00:45:40.392 "num_base_bdevs": 4, 00:45:40.392 "num_base_bdevs_discovered": 3, 00:45:40.392 "num_base_bdevs_operational": 3, 00:45:40.392 "base_bdevs_list": [ 00:45:40.392 { 00:45:40.392 "name": null, 00:45:40.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:45:40.392 "is_configured": false, 00:45:40.392 "data_offset": 0, 00:45:40.392 "data_size": 63488 00:45:40.392 }, 00:45:40.392 { 00:45:40.392 "name": "BaseBdev2", 00:45:40.392 "uuid": "ca9e31d8-2bfc-5de4-ac37-2ba829398ce2", 00:45:40.392 "is_configured": true, 00:45:40.392 "data_offset": 2048, 00:45:40.392 "data_size": 63488 00:45:40.392 }, 00:45:40.392 { 00:45:40.392 "name": "BaseBdev3", 00:45:40.392 "uuid": "0fe5e86e-6869-5254-87df-2497c7b0f8fc", 00:45:40.392 "is_configured": true, 00:45:40.392 "data_offset": 2048, 00:45:40.392 "data_size": 63488 00:45:40.392 }, 00:45:40.392 { 00:45:40.392 "name": "BaseBdev4", 00:45:40.392 "uuid": "45c95e6f-af25-5d06-bcca-be1696aba049", 00:45:40.392 "is_configured": true, 00:45:40.392 "data_offset": 2048, 00:45:40.392 "data_size": 63488 00:45:40.392 } 00:45:40.392 ] 00:45:40.392 }' 00:45:40.392 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:45:40.392 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:45:40.649 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:45:40.649 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:45:40.649 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:45:40.649 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:45:40.649 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:45:40.649 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:40.649 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:40.649 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:40.649 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:40.649 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:40.908 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:45:40.908 "name": "raid_bdev1", 00:45:40.908 "uuid": "43ce1f99-b665-4243-b6da-f3729543d2e7", 00:45:40.908 "strip_size_kb": 64, 00:45:40.908 "state": "online", 00:45:40.908 "raid_level": "raid5f", 00:45:40.908 "superblock": true, 00:45:40.908 "num_base_bdevs": 4, 00:45:40.908 "num_base_bdevs_discovered": 3, 00:45:40.908 "num_base_bdevs_operational": 3, 00:45:40.908 "base_bdevs_list": [ 00:45:40.908 { 00:45:40.908 "name": null, 00:45:40.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:45:40.908 "is_configured": false, 00:45:40.908 "data_offset": 0, 00:45:40.908 "data_size": 63488 00:45:40.908 }, 00:45:40.908 { 00:45:40.908 "name": "BaseBdev2", 00:45:40.908 "uuid": "ca9e31d8-2bfc-5de4-ac37-2ba829398ce2", 00:45:40.908 "is_configured": true, 
00:45:40.908 "data_offset": 2048, 00:45:40.908 "data_size": 63488 00:45:40.908 }, 00:45:40.908 { 00:45:40.908 "name": "BaseBdev3", 00:45:40.908 "uuid": "0fe5e86e-6869-5254-87df-2497c7b0f8fc", 00:45:40.908 "is_configured": true, 00:45:40.908 "data_offset": 2048, 00:45:40.908 "data_size": 63488 00:45:40.908 }, 00:45:40.908 { 00:45:40.908 "name": "BaseBdev4", 00:45:40.908 "uuid": "45c95e6f-af25-5d06-bcca-be1696aba049", 00:45:40.908 "is_configured": true, 00:45:40.908 "data_offset": 2048, 00:45:40.908 "data_size": 63488 00:45:40.908 } 00:45:40.908 ] 00:45:40.908 }' 00:45:40.908 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:45:40.908 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:45:40.908 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:45:40.908 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:45:40.908 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85640 00:45:40.908 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85640 ']' 00:45:40.908 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85640 00:45:40.908 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:45:40.908 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:40.908 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85640 00:45:40.908 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:40.908 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:40.908 killing process with pid 85640 00:45:40.908 Received shutdown signal, test 
time was about 60.000000 seconds 00:45:40.908 00:45:40.908 Latency(us) 00:45:40.908 [2024-12-09T05:36:27.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:40.908 [2024-12-09T05:36:27.880Z] =================================================================================================================== 00:45:40.908 [2024-12-09T05:36:27.880Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:45:40.908 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85640' 00:45:40.908 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85640 00:45:40.908 05:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85640 00:45:40.908 [2024-12-09 05:36:27.785680] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:45:40.908 [2024-12-09 05:36:27.785883] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:45:40.908 [2024-12-09 05:36:27.786001] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:45:40.908 [2024-12-09 05:36:27.786024] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:45:41.475 [2024-12-09 05:36:28.277829] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:45:42.849 05:36:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:45:42.849 00:45:42.849 real 0m29.104s 00:45:42.849 user 0m37.813s 00:45:42.849 sys 0m2.996s 00:45:42.849 05:36:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:42.849 05:36:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:45:42.849 ************************************ 00:45:42.849 END TEST raid5f_rebuild_test_sb 00:45:42.849 ************************************ 00:45:42.849 05:36:29 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:45:42.849 05:36:29 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:45:42.849 05:36:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:45:42.849 05:36:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:42.849 05:36:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:45:42.849 ************************************ 00:45:42.849 START TEST raid_state_function_test_sb_4k 00:45:42.849 ************************************ 00:45:42.849 05:36:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:45:42.849 05:36:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:45:42.849 05:36:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:45:42.849 05:36:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:45:42.849 05:36:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:45:42.850 05:36:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:45:42.850 05:36:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:45:42.850 05:36:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:45:42.850 05:36:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:45:42.850 05:36:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:45:42.850 05:36:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:45:42.850 05:36:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:45:42.850 05:36:29 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:45:42.850 05:36:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:45:42.850 05:36:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:45:42.850 05:36:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:45:42.850 05:36:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:45:42.850 05:36:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:45:42.850 05:36:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:45:42.850 05:36:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:45:42.850 05:36:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:45:42.850 05:36:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:45:42.850 05:36:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:45:42.850 05:36:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86468 00:45:42.850 05:36:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:45:42.850 Process raid pid: 86468 00:45:42.850 05:36:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86468' 00:45:42.850 05:36:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86468 00:45:42.850 05:36:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86468 ']' 00:45:42.850 05:36:29 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:42.850 05:36:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:42.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:42.850 05:36:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:42.850 05:36:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:42.850 05:36:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:42.850 [2024-12-09 05:36:29.774964] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:45:42.850 [2024-12-09 05:36:29.775160] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:43.107 [2024-12-09 05:36:29.968101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:43.365 [2024-12-09 05:36:30.121461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:43.623 [2024-12-09 05:36:30.364548] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:45:43.623 [2024-12-09 05:36:30.364631] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:45:43.881 05:36:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:43.881 05:36:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:45:43.881 05:36:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:45:43.881 05:36:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:43.881 05:36:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:43.881 [2024-12-09 05:36:30.719871] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:45:43.881 [2024-12-09 05:36:30.719945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:45:43.881 [2024-12-09 05:36:30.719962] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:45:43.881 [2024-12-09 05:36:30.719980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:45:43.881 05:36:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:43.881 05:36:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:45:43.881 05:36:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:45:43.881 05:36:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:45:43.881 05:36:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:45:43.881 05:36:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:45:43.881 05:36:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:45:43.881 05:36:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:45:43.881 05:36:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:45:43.881 05:36:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:45:43.881 
05:36:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:45:43.881 05:36:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:43.881 05:36:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:43.881 05:36:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:43.881 05:36:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:45:43.881 05:36:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:43.881 05:36:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:45:43.881 "name": "Existed_Raid", 00:45:43.881 "uuid": "c0b8547d-1083-4f2c-a5e6-43ed304f9b72", 00:45:43.881 "strip_size_kb": 0, 00:45:43.881 "state": "configuring", 00:45:43.881 "raid_level": "raid1", 00:45:43.881 "superblock": true, 00:45:43.881 "num_base_bdevs": 2, 00:45:43.881 "num_base_bdevs_discovered": 0, 00:45:43.881 "num_base_bdevs_operational": 2, 00:45:43.881 "base_bdevs_list": [ 00:45:43.881 { 00:45:43.881 "name": "BaseBdev1", 00:45:43.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:45:43.881 "is_configured": false, 00:45:43.881 "data_offset": 0, 00:45:43.881 "data_size": 0 00:45:43.881 }, 00:45:43.881 { 00:45:43.881 "name": "BaseBdev2", 00:45:43.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:45:43.881 "is_configured": false, 00:45:43.881 "data_offset": 0, 00:45:43.881 "data_size": 0 00:45:43.881 } 00:45:43.881 ] 00:45:43.881 }' 00:45:43.881 05:36:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:45:43.881 05:36:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:44.446 [2024-12-09 05:36:31.236093] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:45:44.446 [2024-12-09 05:36:31.236141] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:44.446 [2024-12-09 05:36:31.244057] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:45:44.446 [2024-12-09 05:36:31.244112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:45:44.446 [2024-12-09 05:36:31.244129] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:45:44.446 [2024-12-09 05:36:31.244154] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:44.446 05:36:31 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:44.446 [2024-12-09 05:36:31.296163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:45:44.446 BaseBdev1 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:44.446 [ 00:45:44.446 { 00:45:44.446 "name": "BaseBdev1", 00:45:44.446 "aliases": [ 00:45:44.446 
"e24ee5cb-dfbe-487c-b5e4-a403011a4d4a" 00:45:44.446 ], 00:45:44.446 "product_name": "Malloc disk", 00:45:44.446 "block_size": 4096, 00:45:44.446 "num_blocks": 8192, 00:45:44.446 "uuid": "e24ee5cb-dfbe-487c-b5e4-a403011a4d4a", 00:45:44.446 "assigned_rate_limits": { 00:45:44.446 "rw_ios_per_sec": 0, 00:45:44.446 "rw_mbytes_per_sec": 0, 00:45:44.446 "r_mbytes_per_sec": 0, 00:45:44.446 "w_mbytes_per_sec": 0 00:45:44.446 }, 00:45:44.446 "claimed": true, 00:45:44.446 "claim_type": "exclusive_write", 00:45:44.446 "zoned": false, 00:45:44.446 "supported_io_types": { 00:45:44.446 "read": true, 00:45:44.446 "write": true, 00:45:44.446 "unmap": true, 00:45:44.446 "flush": true, 00:45:44.446 "reset": true, 00:45:44.446 "nvme_admin": false, 00:45:44.446 "nvme_io": false, 00:45:44.446 "nvme_io_md": false, 00:45:44.446 "write_zeroes": true, 00:45:44.446 "zcopy": true, 00:45:44.446 "get_zone_info": false, 00:45:44.446 "zone_management": false, 00:45:44.446 "zone_append": false, 00:45:44.446 "compare": false, 00:45:44.446 "compare_and_write": false, 00:45:44.446 "abort": true, 00:45:44.446 "seek_hole": false, 00:45:44.446 "seek_data": false, 00:45:44.446 "copy": true, 00:45:44.446 "nvme_iov_md": false 00:45:44.446 }, 00:45:44.446 "memory_domains": [ 00:45:44.446 { 00:45:44.446 "dma_device_id": "system", 00:45:44.446 "dma_device_type": 1 00:45:44.446 }, 00:45:44.446 { 00:45:44.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:45:44.446 "dma_device_type": 2 00:45:44.446 } 00:45:44.446 ], 00:45:44.446 "driver_specific": {} 00:45:44.446 } 00:45:44.446 ] 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:45:44.446 "name": "Existed_Raid", 00:45:44.446 "uuid": "b69af1e5-7b78-493b-bab5-2fac7666f476", 00:45:44.446 "strip_size_kb": 0, 00:45:44.446 "state": "configuring", 00:45:44.446 "raid_level": "raid1", 00:45:44.446 "superblock": true, 00:45:44.446 "num_base_bdevs": 2, 00:45:44.446 
"num_base_bdevs_discovered": 1, 00:45:44.446 "num_base_bdevs_operational": 2, 00:45:44.446 "base_bdevs_list": [ 00:45:44.446 { 00:45:44.446 "name": "BaseBdev1", 00:45:44.446 "uuid": "e24ee5cb-dfbe-487c-b5e4-a403011a4d4a", 00:45:44.446 "is_configured": true, 00:45:44.446 "data_offset": 256, 00:45:44.446 "data_size": 7936 00:45:44.446 }, 00:45:44.446 { 00:45:44.446 "name": "BaseBdev2", 00:45:44.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:45:44.446 "is_configured": false, 00:45:44.446 "data_offset": 0, 00:45:44.446 "data_size": 0 00:45:44.446 } 00:45:44.446 ] 00:45:44.446 }' 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:45:44.446 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:45.011 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:45:45.011 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:45.011 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:45.011 [2024-12-09 05:36:31.860401] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:45:45.011 [2024-12-09 05:36:31.860480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:45:45.011 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:45.011 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:45:45.011 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:45.011 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:45.011 [2024-12-09 05:36:31.868546] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:45:45.011 [2024-12-09 05:36:31.871548] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:45:45.011 [2024-12-09 05:36:31.871805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:45:45.011 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:45.011 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:45:45.011 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:45:45.011 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:45:45.011 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:45:45.011 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:45:45.011 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:45:45.011 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:45:45.011 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:45:45.011 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:45:45.011 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:45:45.011 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:45:45.011 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:45:45.011 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:45:45.011 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:45.011 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:45:45.011 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:45.011 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:45.011 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:45:45.011 "name": "Existed_Raid", 00:45:45.011 "uuid": "bbc4aa40-709b-4af6-9388-43acb4eeadcd", 00:45:45.011 "strip_size_kb": 0, 00:45:45.011 "state": "configuring", 00:45:45.011 "raid_level": "raid1", 00:45:45.011 "superblock": true, 00:45:45.011 "num_base_bdevs": 2, 00:45:45.011 "num_base_bdevs_discovered": 1, 00:45:45.011 "num_base_bdevs_operational": 2, 00:45:45.011 "base_bdevs_list": [ 00:45:45.011 { 00:45:45.011 "name": "BaseBdev1", 00:45:45.011 "uuid": "e24ee5cb-dfbe-487c-b5e4-a403011a4d4a", 00:45:45.011 "is_configured": true, 00:45:45.011 "data_offset": 256, 00:45:45.011 "data_size": 7936 00:45:45.011 }, 00:45:45.011 { 00:45:45.011 "name": "BaseBdev2", 00:45:45.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:45:45.011 "is_configured": false, 00:45:45.011 "data_offset": 0, 00:45:45.011 "data_size": 0 00:45:45.011 } 00:45:45.011 ] 00:45:45.011 }' 00:45:45.011 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:45:45.011 05:36:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:45.577 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:45:45.577 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:45.578 05:36:32 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:45.578 [2024-12-09 05:36:32.437105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:45:45.578 [2024-12-09 05:36:32.437664] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:45:45.578 [2024-12-09 05:36:32.437690] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:45:45.578 [2024-12-09 05:36:32.438045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:45:45.578 BaseBdev2 00:45:45.578 [2024-12-09 05:36:32.438303] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:45:45.578 [2024-12-09 05:36:32.438331] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:45:45.578 [2024-12-09 05:36:32.438532] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:45:45.578 05:36:32 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:45.578 [ 00:45:45.578 { 00:45:45.578 "name": "BaseBdev2", 00:45:45.578 "aliases": [ 00:45:45.578 "61a43dae-c32b-40e1-a5a0-b7ebdff8da33" 00:45:45.578 ], 00:45:45.578 "product_name": "Malloc disk", 00:45:45.578 "block_size": 4096, 00:45:45.578 "num_blocks": 8192, 00:45:45.578 "uuid": "61a43dae-c32b-40e1-a5a0-b7ebdff8da33", 00:45:45.578 "assigned_rate_limits": { 00:45:45.578 "rw_ios_per_sec": 0, 00:45:45.578 "rw_mbytes_per_sec": 0, 00:45:45.578 "r_mbytes_per_sec": 0, 00:45:45.578 "w_mbytes_per_sec": 0 00:45:45.578 }, 00:45:45.578 "claimed": true, 00:45:45.578 "claim_type": "exclusive_write", 00:45:45.578 "zoned": false, 00:45:45.578 "supported_io_types": { 00:45:45.578 "read": true, 00:45:45.578 "write": true, 00:45:45.578 "unmap": true, 00:45:45.578 "flush": true, 00:45:45.578 "reset": true, 00:45:45.578 "nvme_admin": false, 00:45:45.578 "nvme_io": false, 00:45:45.578 "nvme_io_md": false, 00:45:45.578 "write_zeroes": true, 00:45:45.578 "zcopy": true, 00:45:45.578 "get_zone_info": false, 00:45:45.578 "zone_management": false, 00:45:45.578 "zone_append": false, 00:45:45.578 "compare": false, 00:45:45.578 "compare_and_write": false, 00:45:45.578 "abort": true, 00:45:45.578 "seek_hole": false, 00:45:45.578 "seek_data": false, 00:45:45.578 "copy": true, 00:45:45.578 "nvme_iov_md": false 
00:45:45.578 }, 00:45:45.578 "memory_domains": [ 00:45:45.578 { 00:45:45.578 "dma_device_id": "system", 00:45:45.578 "dma_device_type": 1 00:45:45.578 }, 00:45:45.578 { 00:45:45.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:45:45.578 "dma_device_type": 2 00:45:45.578 } 00:45:45.578 ], 00:45:45.578 "driver_specific": {} 00:45:45.578 } 00:45:45.578 ] 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:45:45.578 "name": "Existed_Raid", 00:45:45.578 "uuid": "bbc4aa40-709b-4af6-9388-43acb4eeadcd", 00:45:45.578 "strip_size_kb": 0, 00:45:45.578 "state": "online", 00:45:45.578 "raid_level": "raid1", 00:45:45.578 "superblock": true, 00:45:45.578 "num_base_bdevs": 2, 00:45:45.578 "num_base_bdevs_discovered": 2, 00:45:45.578 "num_base_bdevs_operational": 2, 00:45:45.578 "base_bdevs_list": [ 00:45:45.578 { 00:45:45.578 "name": "BaseBdev1", 00:45:45.578 "uuid": "e24ee5cb-dfbe-487c-b5e4-a403011a4d4a", 00:45:45.578 "is_configured": true, 00:45:45.578 "data_offset": 256, 00:45:45.578 "data_size": 7936 00:45:45.578 }, 00:45:45.578 { 00:45:45.578 "name": "BaseBdev2", 00:45:45.578 "uuid": "61a43dae-c32b-40e1-a5a0-b7ebdff8da33", 00:45:45.578 "is_configured": true, 00:45:45.578 "data_offset": 256, 00:45:45.578 "data_size": 7936 00:45:45.578 } 00:45:45.578 ] 00:45:45.578 }' 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:45:45.578 05:36:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:46.144 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:45:46.144 05:36:33 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:45:46.144 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:45:46.144 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:45:46.144 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:45:46.144 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:45:46.144 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:45:46.144 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:45:46.144 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:46.144 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:46.144 [2024-12-09 05:36:33.021838] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:45:46.144 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:46.144 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:45:46.144 "name": "Existed_Raid", 00:45:46.144 "aliases": [ 00:45:46.144 "bbc4aa40-709b-4af6-9388-43acb4eeadcd" 00:45:46.144 ], 00:45:46.144 "product_name": "Raid Volume", 00:45:46.144 "block_size": 4096, 00:45:46.144 "num_blocks": 7936, 00:45:46.144 "uuid": "bbc4aa40-709b-4af6-9388-43acb4eeadcd", 00:45:46.144 "assigned_rate_limits": { 00:45:46.144 "rw_ios_per_sec": 0, 00:45:46.144 "rw_mbytes_per_sec": 0, 00:45:46.144 "r_mbytes_per_sec": 0, 00:45:46.144 "w_mbytes_per_sec": 0 00:45:46.144 }, 00:45:46.144 "claimed": false, 00:45:46.144 "zoned": false, 00:45:46.144 "supported_io_types": { 00:45:46.144 "read": true, 
00:45:46.144 "write": true, 00:45:46.144 "unmap": false, 00:45:46.144 "flush": false, 00:45:46.144 "reset": true, 00:45:46.144 "nvme_admin": false, 00:45:46.144 "nvme_io": false, 00:45:46.144 "nvme_io_md": false, 00:45:46.144 "write_zeroes": true, 00:45:46.144 "zcopy": false, 00:45:46.144 "get_zone_info": false, 00:45:46.144 "zone_management": false, 00:45:46.144 "zone_append": false, 00:45:46.144 "compare": false, 00:45:46.144 "compare_and_write": false, 00:45:46.144 "abort": false, 00:45:46.144 "seek_hole": false, 00:45:46.144 "seek_data": false, 00:45:46.144 "copy": false, 00:45:46.144 "nvme_iov_md": false 00:45:46.144 }, 00:45:46.144 "memory_domains": [ 00:45:46.144 { 00:45:46.144 "dma_device_id": "system", 00:45:46.144 "dma_device_type": 1 00:45:46.144 }, 00:45:46.144 { 00:45:46.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:45:46.144 "dma_device_type": 2 00:45:46.144 }, 00:45:46.144 { 00:45:46.144 "dma_device_id": "system", 00:45:46.144 "dma_device_type": 1 00:45:46.144 }, 00:45:46.144 { 00:45:46.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:45:46.144 "dma_device_type": 2 00:45:46.144 } 00:45:46.144 ], 00:45:46.144 "driver_specific": { 00:45:46.144 "raid": { 00:45:46.144 "uuid": "bbc4aa40-709b-4af6-9388-43acb4eeadcd", 00:45:46.144 "strip_size_kb": 0, 00:45:46.144 "state": "online", 00:45:46.144 "raid_level": "raid1", 00:45:46.144 "superblock": true, 00:45:46.144 "num_base_bdevs": 2, 00:45:46.144 "num_base_bdevs_discovered": 2, 00:45:46.144 "num_base_bdevs_operational": 2, 00:45:46.144 "base_bdevs_list": [ 00:45:46.144 { 00:45:46.144 "name": "BaseBdev1", 00:45:46.144 "uuid": "e24ee5cb-dfbe-487c-b5e4-a403011a4d4a", 00:45:46.144 "is_configured": true, 00:45:46.144 "data_offset": 256, 00:45:46.144 "data_size": 7936 00:45:46.144 }, 00:45:46.144 { 00:45:46.144 "name": "BaseBdev2", 00:45:46.144 "uuid": "61a43dae-c32b-40e1-a5a0-b7ebdff8da33", 00:45:46.144 "is_configured": true, 00:45:46.144 "data_offset": 256, 00:45:46.144 "data_size": 7936 00:45:46.144 } 
00:45:46.144 ] 00:45:46.144 } 00:45:46.144 } 00:45:46.144 }' 00:45:46.144 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:45:46.144 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:45:46.144 BaseBdev2' 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:46.402 [2024-12-09 05:36:33.277528] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:45:46.402 05:36:33 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:45:46.402 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:45:46.660 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:46.660 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:45:46.660 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:46.660 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:46.660 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:46.660 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:45:46.660 "name": "Existed_Raid", 00:45:46.660 "uuid": "bbc4aa40-709b-4af6-9388-43acb4eeadcd", 00:45:46.660 "strip_size_kb": 0, 00:45:46.660 "state": "online", 00:45:46.660 "raid_level": "raid1", 00:45:46.660 "superblock": true, 00:45:46.660 
"num_base_bdevs": 2, 00:45:46.660 "num_base_bdevs_discovered": 1, 00:45:46.660 "num_base_bdevs_operational": 1, 00:45:46.660 "base_bdevs_list": [ 00:45:46.660 { 00:45:46.660 "name": null, 00:45:46.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:45:46.660 "is_configured": false, 00:45:46.660 "data_offset": 0, 00:45:46.660 "data_size": 7936 00:45:46.660 }, 00:45:46.660 { 00:45:46.660 "name": "BaseBdev2", 00:45:46.660 "uuid": "61a43dae-c32b-40e1-a5a0-b7ebdff8da33", 00:45:46.660 "is_configured": true, 00:45:46.660 "data_offset": 256, 00:45:46.660 "data_size": 7936 00:45:46.660 } 00:45:46.660 ] 00:45:46.660 }' 00:45:46.660 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:45:46.660 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:46.918 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:45:47.176 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:45:47.176 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:45:47.176 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:47.176 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:47.176 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:47.176 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:47.176 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:45:47.176 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:45:47.176 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:45:47.176 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:47.176 05:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:47.176 [2024-12-09 05:36:33.946035] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:45:47.176 [2024-12-09 05:36:33.946223] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:45:47.176 [2024-12-09 05:36:34.041185] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:45:47.176 [2024-12-09 05:36:34.041278] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:45:47.176 [2024-12-09 05:36:34.041300] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:45:47.176 05:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:47.176 05:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:45:47.176 05:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:45:47.176 05:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:47.176 05:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:47.176 05:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:45:47.176 05:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:47.176 05:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:47.176 05:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:45:47.176 05:36:34 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:45:47.176 05:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:45:47.176 05:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86468 00:45:47.176 05:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86468 ']' 00:45:47.176 05:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86468 00:45:47.176 05:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:45:47.176 05:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:47.176 05:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86468 00:45:47.176 killing process with pid 86468 00:45:47.176 05:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:47.176 05:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:47.176 05:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86468' 00:45:47.176 05:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86468 00:45:47.176 [2024-12-09 05:36:34.128739] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:45:47.176 05:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86468 00:45:47.176 [2024-12-09 05:36:34.144936] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:45:48.550 ************************************ 00:45:48.550 END TEST raid_state_function_test_sb_4k 00:45:48.550 ************************************ 00:45:48.550 05:36:35 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@328 -- # return 0 00:45:48.550 00:45:48.550 real 0m5.747s 00:45:48.550 user 0m8.540s 00:45:48.550 sys 0m0.833s 00:45:48.550 05:36:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:48.550 05:36:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:48.550 05:36:35 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:45:48.550 05:36:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:45:48.550 05:36:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:48.550 05:36:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:45:48.550 ************************************ 00:45:48.550 START TEST raid_superblock_test_4k 00:45:48.550 ************************************ 00:45:48.550 05:36:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:45:48.550 05:36:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:45:48.550 05:36:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:45:48.550 05:36:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:45:48.550 05:36:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:45:48.550 05:36:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:45:48.550 05:36:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:45:48.551 05:36:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:45:48.551 05:36:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:45:48.551 05:36:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:45:48.551 
05:36:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:45:48.551 05:36:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:45:48.551 05:36:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:45:48.551 05:36:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:45:48.551 05:36:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:45:48.551 05:36:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:45:48.551 05:36:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86720 00:45:48.551 05:36:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:45:48.551 05:36:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86720 00:45:48.551 05:36:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86720 ']' 00:45:48.551 05:36:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:48.551 05:36:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:48.551 05:36:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:48.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:48.551 05:36:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:48.551 05:36:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:48.809 [2024-12-09 05:36:35.553733] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:45:48.809 [2024-12-09 05:36:35.554241] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86720 ] 00:45:48.809 [2024-12-09 05:36:35.738340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:49.067 [2024-12-09 05:36:35.895969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:49.325 [2024-12-09 05:36:36.101501] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:45:49.325 [2024-12-09 05:36:36.101542] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:45:49.917 05:36:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:49.917 05:36:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:45:49.917 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:45:49.917 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:45:49.917 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:45:49.917 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:49.918 malloc1 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:49.918 [2024-12-09 05:36:36.712739] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:45:49.918 [2024-12-09 05:36:36.712846] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:49.918 [2024-12-09 05:36:36.712878] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:45:49.918 [2024-12-09 05:36:36.712892] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:49.918 [2024-12-09 05:36:36.715581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:49.918 [2024-12-09 05:36:36.715907] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:45:49.918 pt1 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:49.918 malloc2 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:49.918 [2024-12-09 05:36:36.760717] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:45:49.918 [2024-12-09 05:36:36.760819] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:49.918 [2024-12-09 05:36:36.760856] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:45:49.918 [2024-12-09 05:36:36.760871] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:49.918 [2024-12-09 05:36:36.763510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:49.918 [2024-12-09 
05:36:36.763550] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:45:49.918 pt2 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:49.918 [2024-12-09 05:36:36.768788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:45:49.918 [2024-12-09 05:36:36.771268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:45:49.918 [2024-12-09 05:36:36.771637] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:45:49.918 [2024-12-09 05:36:36.771795] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:45:49.918 [2024-12-09 05:36:36.772162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:45:49.918 [2024-12-09 05:36:36.772491] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:45:49.918 [2024-12-09 05:36:36.772652] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:45:49.918 [2024-12-09 05:36:36.773114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:45:49.918 "name": "raid_bdev1", 00:45:49.918 "uuid": "5c32310a-25d0-481a-8211-4ce96bda72e1", 00:45:49.918 "strip_size_kb": 0, 00:45:49.918 "state": "online", 00:45:49.918 "raid_level": "raid1", 00:45:49.918 "superblock": true, 00:45:49.918 "num_base_bdevs": 2, 00:45:49.918 
"num_base_bdevs_discovered": 2, 00:45:49.918 "num_base_bdevs_operational": 2, 00:45:49.918 "base_bdevs_list": [ 00:45:49.918 { 00:45:49.918 "name": "pt1", 00:45:49.918 "uuid": "00000000-0000-0000-0000-000000000001", 00:45:49.918 "is_configured": true, 00:45:49.918 "data_offset": 256, 00:45:49.918 "data_size": 7936 00:45:49.918 }, 00:45:49.918 { 00:45:49.918 "name": "pt2", 00:45:49.918 "uuid": "00000000-0000-0000-0000-000000000002", 00:45:49.918 "is_configured": true, 00:45:49.918 "data_offset": 256, 00:45:49.918 "data_size": 7936 00:45:49.918 } 00:45:49.918 ] 00:45:49.918 }' 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:45:49.918 05:36:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:50.486 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:45:50.486 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:45:50.486 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:45:50.486 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:45:50.486 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:45:50.486 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:45:50.486 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:45:50.486 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:50.486 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:50.486 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:45:50.486 [2024-12-09 05:36:37.297595] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:45:50.486 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:50.486 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:45:50.486 "name": "raid_bdev1", 00:45:50.486 "aliases": [ 00:45:50.486 "5c32310a-25d0-481a-8211-4ce96bda72e1" 00:45:50.486 ], 00:45:50.486 "product_name": "Raid Volume", 00:45:50.486 "block_size": 4096, 00:45:50.486 "num_blocks": 7936, 00:45:50.486 "uuid": "5c32310a-25d0-481a-8211-4ce96bda72e1", 00:45:50.486 "assigned_rate_limits": { 00:45:50.486 "rw_ios_per_sec": 0, 00:45:50.486 "rw_mbytes_per_sec": 0, 00:45:50.486 "r_mbytes_per_sec": 0, 00:45:50.486 "w_mbytes_per_sec": 0 00:45:50.486 }, 00:45:50.486 "claimed": false, 00:45:50.486 "zoned": false, 00:45:50.486 "supported_io_types": { 00:45:50.486 "read": true, 00:45:50.486 "write": true, 00:45:50.486 "unmap": false, 00:45:50.486 "flush": false, 00:45:50.486 "reset": true, 00:45:50.486 "nvme_admin": false, 00:45:50.486 "nvme_io": false, 00:45:50.486 "nvme_io_md": false, 00:45:50.486 "write_zeroes": true, 00:45:50.486 "zcopy": false, 00:45:50.486 "get_zone_info": false, 00:45:50.486 "zone_management": false, 00:45:50.486 "zone_append": false, 00:45:50.486 "compare": false, 00:45:50.486 "compare_and_write": false, 00:45:50.486 "abort": false, 00:45:50.486 "seek_hole": false, 00:45:50.486 "seek_data": false, 00:45:50.486 "copy": false, 00:45:50.486 "nvme_iov_md": false 00:45:50.486 }, 00:45:50.487 "memory_domains": [ 00:45:50.487 { 00:45:50.487 "dma_device_id": "system", 00:45:50.487 "dma_device_type": 1 00:45:50.487 }, 00:45:50.487 { 00:45:50.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:45:50.487 "dma_device_type": 2 00:45:50.487 }, 00:45:50.487 { 00:45:50.487 "dma_device_id": "system", 00:45:50.487 "dma_device_type": 1 00:45:50.487 }, 00:45:50.487 { 00:45:50.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:45:50.487 "dma_device_type": 2 00:45:50.487 } 00:45:50.487 ], 
00:45:50.487 "driver_specific": { 00:45:50.487 "raid": { 00:45:50.487 "uuid": "5c32310a-25d0-481a-8211-4ce96bda72e1", 00:45:50.487 "strip_size_kb": 0, 00:45:50.487 "state": "online", 00:45:50.487 "raid_level": "raid1", 00:45:50.487 "superblock": true, 00:45:50.487 "num_base_bdevs": 2, 00:45:50.487 "num_base_bdevs_discovered": 2, 00:45:50.487 "num_base_bdevs_operational": 2, 00:45:50.487 "base_bdevs_list": [ 00:45:50.487 { 00:45:50.487 "name": "pt1", 00:45:50.487 "uuid": "00000000-0000-0000-0000-000000000001", 00:45:50.487 "is_configured": true, 00:45:50.487 "data_offset": 256, 00:45:50.487 "data_size": 7936 00:45:50.487 }, 00:45:50.487 { 00:45:50.487 "name": "pt2", 00:45:50.487 "uuid": "00000000-0000-0000-0000-000000000002", 00:45:50.487 "is_configured": true, 00:45:50.487 "data_offset": 256, 00:45:50.487 "data_size": 7936 00:45:50.487 } 00:45:50.487 ] 00:45:50.487 } 00:45:50.487 } 00:45:50.487 }' 00:45:50.487 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:45:50.487 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:45:50.487 pt2' 00:45:50.487 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:45:50.487 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:45:50.487 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:45:50.487 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:45:50.487 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:45:50.487 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:50.487 05:36:37 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:50.746 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:50.746 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:45:50.746 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:45:50.746 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:45:50.746 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:45:50.746 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:45:50.746 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:50.746 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:50.746 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:50.746 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:45:50.746 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:45:50.746 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:45:50.746 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:45:50.746 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:50.746 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:50.746 [2024-12-09 05:36:37.565717] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:45:50.746 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:45:50.746 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5c32310a-25d0-481a-8211-4ce96bda72e1 00:45:50.746 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 5c32310a-25d0-481a-8211-4ce96bda72e1 ']' 00:45:50.746 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:45:50.746 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:50.746 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:50.746 [2024-12-09 05:36:37.613358] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:45:50.746 [2024-12-09 05:36:37.613386] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:45:50.746 [2024-12-09 05:36:37.613485] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:45:50.746 [2024-12-09 05:36:37.613557] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:45:50.746 [2024-12-09 05:36:37.613575] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:45:50.746 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:50.746 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:50.746 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:45:50.746 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:50.746 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:50.746 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:50.746 05:36:37 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:45:50.747 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:45:50.747 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:45:50.747 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:45:50.747 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:50.747 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:50.747 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:50.747 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:45:50.747 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:45:50.747 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:50.747 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:50.747 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:50.747 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:45:50.747 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:45:50.747 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:50.747 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:50.747 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:51.006 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:45:51.006 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:45:51.006 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:45:51.006 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:45:51.006 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:45:51.006 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:51.006 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:45:51.006 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:51.006 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:45:51.006 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:51.006 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:51.006 [2024-12-09 05:36:37.749421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:45:51.006 [2024-12-09 05:36:37.752294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:45:51.006 [2024-12-09 05:36:37.752391] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:45:51.006 [2024-12-09 05:36:37.752485] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:45:51.006 [2024-12-09 05:36:37.752511] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:45:51.006 [2024-12-09 05:36:37.752525] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:45:51.006 request: 00:45:51.006 { 00:45:51.006 "name": "raid_bdev1", 00:45:51.006 "raid_level": "raid1", 00:45:51.006 "base_bdevs": [ 00:45:51.006 "malloc1", 00:45:51.006 "malloc2" 00:45:51.006 ], 00:45:51.006 "superblock": false, 00:45:51.006 "method": "bdev_raid_create", 00:45:51.006 "req_id": 1 00:45:51.006 } 00:45:51.006 Got JSON-RPC error response 00:45:51.006 response: 00:45:51.006 { 00:45:51.006 "code": -17, 00:45:51.006 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:45:51.006 } 00:45:51.006 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:45:51.006 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:45:51.006 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:51.006 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:51.006 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:51.006 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:51.006 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:45:51.006 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:51.006 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:51.006 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:51.006 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:45:51.006 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:45:51.006 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:45:51.007 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:51.007 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:51.007 [2024-12-09 05:36:37.821445] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:45:51.007 [2024-12-09 05:36:37.821660] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:51.007 [2024-12-09 05:36:37.821729] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:45:51.007 [2024-12-09 05:36:37.821942] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:51.007 [2024-12-09 05:36:37.825113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:51.007 [2024-12-09 05:36:37.825320] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:45:51.007 [2024-12-09 05:36:37.825448] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:45:51.007 [2024-12-09 05:36:37.825526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:45:51.007 pt1 00:45:51.007 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:51.007 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:45:51.007 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:45:51.007 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:45:51.007 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:45:51.007 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:45:51.007 05:36:37 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:45:51.007 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:45:51.007 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:45:51.007 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:45:51.007 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:45:51.007 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:51.007 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:51.007 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:51.007 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:51.007 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:51.007 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:45:51.007 "name": "raid_bdev1", 00:45:51.007 "uuid": "5c32310a-25d0-481a-8211-4ce96bda72e1", 00:45:51.007 "strip_size_kb": 0, 00:45:51.007 "state": "configuring", 00:45:51.007 "raid_level": "raid1", 00:45:51.007 "superblock": true, 00:45:51.007 "num_base_bdevs": 2, 00:45:51.007 "num_base_bdevs_discovered": 1, 00:45:51.007 "num_base_bdevs_operational": 2, 00:45:51.007 "base_bdevs_list": [ 00:45:51.007 { 00:45:51.007 "name": "pt1", 00:45:51.007 "uuid": "00000000-0000-0000-0000-000000000001", 00:45:51.007 "is_configured": true, 00:45:51.007 "data_offset": 256, 00:45:51.007 "data_size": 7936 00:45:51.007 }, 00:45:51.007 { 00:45:51.007 "name": null, 00:45:51.007 "uuid": "00000000-0000-0000-0000-000000000002", 00:45:51.007 "is_configured": false, 00:45:51.007 "data_offset": 256, 00:45:51.007 "data_size": 7936 00:45:51.007 } 
00:45:51.007 ] 00:45:51.007 }' 00:45:51.007 05:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:45:51.007 05:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:51.574 05:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:45:51.574 05:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:45:51.574 05:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:45:51.575 05:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:45:51.575 05:36:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:51.575 05:36:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:51.575 [2024-12-09 05:36:38.349693] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:45:51.575 [2024-12-09 05:36:38.349835] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:51.575 [2024-12-09 05:36:38.349872] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:45:51.575 [2024-12-09 05:36:38.349892] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:51.575 [2024-12-09 05:36:38.350641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:51.575 [2024-12-09 05:36:38.350680] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:45:51.575 [2024-12-09 05:36:38.350810] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:45:51.575 [2024-12-09 05:36:38.350856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:45:51.575 [2024-12-09 05:36:38.351017] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:45:51.575 [2024-12-09 05:36:38.351047] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:45:51.575 [2024-12-09 05:36:38.351427] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:45:51.575 [2024-12-09 05:36:38.351621] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:45:51.575 [2024-12-09 05:36:38.351637] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:45:51.575 [2024-12-09 05:36:38.351830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:45:51.575 pt2 00:45:51.575 05:36:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:51.575 05:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:45:51.575 05:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:45:51.575 05:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:45:51.575 05:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:45:51.575 05:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:45:51.575 05:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:45:51.575 05:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:45:51.575 05:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:45:51.575 05:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:45:51.575 05:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:45:51.575 05:36:38 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:45:51.575 05:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:45:51.575 05:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:51.575 05:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:51.575 05:36:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:51.575 05:36:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:51.575 05:36:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:51.575 05:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:45:51.575 "name": "raid_bdev1", 00:45:51.575 "uuid": "5c32310a-25d0-481a-8211-4ce96bda72e1", 00:45:51.575 "strip_size_kb": 0, 00:45:51.575 "state": "online", 00:45:51.575 "raid_level": "raid1", 00:45:51.575 "superblock": true, 00:45:51.575 "num_base_bdevs": 2, 00:45:51.575 "num_base_bdevs_discovered": 2, 00:45:51.575 "num_base_bdevs_operational": 2, 00:45:51.575 "base_bdevs_list": [ 00:45:51.575 { 00:45:51.575 "name": "pt1", 00:45:51.575 "uuid": "00000000-0000-0000-0000-000000000001", 00:45:51.575 "is_configured": true, 00:45:51.575 "data_offset": 256, 00:45:51.575 "data_size": 7936 00:45:51.575 }, 00:45:51.575 { 00:45:51.575 "name": "pt2", 00:45:51.575 "uuid": "00000000-0000-0000-0000-000000000002", 00:45:51.575 "is_configured": true, 00:45:51.575 "data_offset": 256, 00:45:51.575 "data_size": 7936 00:45:51.575 } 00:45:51.575 ] 00:45:51.575 }' 00:45:51.575 05:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:45:51.575 05:36:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:52.143 05:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:45:52.143 05:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:45:52.143 05:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:45:52.143 05:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:45:52.143 05:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:45:52.143 05:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:45:52.143 05:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:45:52.143 05:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:45:52.143 05:36:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:52.143 05:36:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:52.143 [2024-12-09 05:36:38.894130] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:45:52.143 05:36:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:52.143 05:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:45:52.143 "name": "raid_bdev1", 00:45:52.143 "aliases": [ 00:45:52.143 "5c32310a-25d0-481a-8211-4ce96bda72e1" 00:45:52.143 ], 00:45:52.143 "product_name": "Raid Volume", 00:45:52.143 "block_size": 4096, 00:45:52.143 "num_blocks": 7936, 00:45:52.143 "uuid": "5c32310a-25d0-481a-8211-4ce96bda72e1", 00:45:52.143 "assigned_rate_limits": { 00:45:52.143 "rw_ios_per_sec": 0, 00:45:52.143 "rw_mbytes_per_sec": 0, 00:45:52.143 "r_mbytes_per_sec": 0, 00:45:52.143 "w_mbytes_per_sec": 0 00:45:52.143 }, 00:45:52.143 "claimed": false, 00:45:52.143 "zoned": false, 00:45:52.143 "supported_io_types": { 00:45:52.143 "read": true, 00:45:52.143 "write": true, 00:45:52.143 "unmap": false, 
00:45:52.143 "flush": false, 00:45:52.143 "reset": true, 00:45:52.143 "nvme_admin": false, 00:45:52.143 "nvme_io": false, 00:45:52.143 "nvme_io_md": false, 00:45:52.143 "write_zeroes": true, 00:45:52.143 "zcopy": false, 00:45:52.143 "get_zone_info": false, 00:45:52.143 "zone_management": false, 00:45:52.143 "zone_append": false, 00:45:52.143 "compare": false, 00:45:52.143 "compare_and_write": false, 00:45:52.143 "abort": false, 00:45:52.143 "seek_hole": false, 00:45:52.143 "seek_data": false, 00:45:52.143 "copy": false, 00:45:52.143 "nvme_iov_md": false 00:45:52.143 }, 00:45:52.143 "memory_domains": [ 00:45:52.143 { 00:45:52.143 "dma_device_id": "system", 00:45:52.143 "dma_device_type": 1 00:45:52.143 }, 00:45:52.143 { 00:45:52.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:45:52.143 "dma_device_type": 2 00:45:52.144 }, 00:45:52.144 { 00:45:52.144 "dma_device_id": "system", 00:45:52.144 "dma_device_type": 1 00:45:52.144 }, 00:45:52.144 { 00:45:52.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:45:52.144 "dma_device_type": 2 00:45:52.144 } 00:45:52.144 ], 00:45:52.144 "driver_specific": { 00:45:52.144 "raid": { 00:45:52.144 "uuid": "5c32310a-25d0-481a-8211-4ce96bda72e1", 00:45:52.144 "strip_size_kb": 0, 00:45:52.144 "state": "online", 00:45:52.144 "raid_level": "raid1", 00:45:52.144 "superblock": true, 00:45:52.144 "num_base_bdevs": 2, 00:45:52.144 "num_base_bdevs_discovered": 2, 00:45:52.144 "num_base_bdevs_operational": 2, 00:45:52.144 "base_bdevs_list": [ 00:45:52.144 { 00:45:52.144 "name": "pt1", 00:45:52.144 "uuid": "00000000-0000-0000-0000-000000000001", 00:45:52.144 "is_configured": true, 00:45:52.144 "data_offset": 256, 00:45:52.144 "data_size": 7936 00:45:52.144 }, 00:45:52.144 { 00:45:52.144 "name": "pt2", 00:45:52.144 "uuid": "00000000-0000-0000-0000-000000000002", 00:45:52.144 "is_configured": true, 00:45:52.144 "data_offset": 256, 00:45:52.144 "data_size": 7936 00:45:52.144 } 00:45:52.144 ] 00:45:52.144 } 00:45:52.144 } 00:45:52.144 }' 00:45:52.144 
05:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:45:52.144 05:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:45:52.144 pt2' 00:45:52.144 05:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:45:52.144 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:45:52.144 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:45:52.144 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:45:52.144 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:45:52.144 05:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:52.144 05:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:52.144 05:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:52.144 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:45:52.144 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:45:52.144 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:45:52.144 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:45:52.144 05:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:52.144 05:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:52.144 05:36:39 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:52.404 [2024-12-09 05:36:39.166298] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 5c32310a-25d0-481a-8211-4ce96bda72e1 '!=' 5c32310a-25d0-481a-8211-4ce96bda72e1 ']' 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:52.404 [2024-12-09 05:36:39.210098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:45:52.404 "name": "raid_bdev1", 00:45:52.404 "uuid": 
"5c32310a-25d0-481a-8211-4ce96bda72e1", 00:45:52.404 "strip_size_kb": 0, 00:45:52.404 "state": "online", 00:45:52.404 "raid_level": "raid1", 00:45:52.404 "superblock": true, 00:45:52.404 "num_base_bdevs": 2, 00:45:52.404 "num_base_bdevs_discovered": 1, 00:45:52.404 "num_base_bdevs_operational": 1, 00:45:52.404 "base_bdevs_list": [ 00:45:52.404 { 00:45:52.404 "name": null, 00:45:52.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:45:52.404 "is_configured": false, 00:45:52.404 "data_offset": 0, 00:45:52.404 "data_size": 7936 00:45:52.404 }, 00:45:52.404 { 00:45:52.404 "name": "pt2", 00:45:52.404 "uuid": "00000000-0000-0000-0000-000000000002", 00:45:52.404 "is_configured": true, 00:45:52.404 "data_offset": 256, 00:45:52.404 "data_size": 7936 00:45:52.404 } 00:45:52.404 ] 00:45:52.404 }' 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:45:52.404 05:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:52.973 [2024-12-09 05:36:39.746083] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:45:52.973 [2024-12-09 05:36:39.746152] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:45:52.973 [2024-12-09 05:36:39.746260] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:45:52.973 [2024-12-09 05:36:39.746327] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:45:52.973 [2024-12-09 05:36:39.746346] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:52.973 [2024-12-09 05:36:39.826041] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:45:52.973 [2024-12-09 05:36:39.826121] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:52.973 [2024-12-09 05:36:39.826192] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:45:52.973 [2024-12-09 05:36:39.826208] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:52.973 [2024-12-09 05:36:39.829253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:52.973 [2024-12-09 05:36:39.829313] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:45:52.973 [2024-12-09 05:36:39.829408] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:45:52.973 [2024-12-09 05:36:39.829465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:45:52.973 [2024-12-09 05:36:39.829584] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:45:52.973 [2024-12-09 05:36:39.829604] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:45:52.973 [2024-12-09 05:36:39.829928] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:45:52.973 [2024-12-09 05:36:39.830123] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:45:52.973 [2024-12-09 05:36:39.830138] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:45:52.973 [2024-12-09 05:36:39.830402] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:45:52.973 pt2 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:52.973 05:36:39 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:45:52.973 "name": "raid_bdev1", 00:45:52.973 "uuid": "5c32310a-25d0-481a-8211-4ce96bda72e1", 00:45:52.973 "strip_size_kb": 0, 00:45:52.973 "state": "online", 00:45:52.973 "raid_level": "raid1", 00:45:52.973 "superblock": true, 00:45:52.973 "num_base_bdevs": 2, 00:45:52.973 "num_base_bdevs_discovered": 1, 00:45:52.973 "num_base_bdevs_operational": 1, 00:45:52.973 "base_bdevs_list": [ 00:45:52.973 { 00:45:52.973 "name": null, 00:45:52.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:45:52.973 "is_configured": false, 00:45:52.973 "data_offset": 256, 00:45:52.973 "data_size": 7936 00:45:52.973 }, 00:45:52.973 { 00:45:52.973 "name": "pt2", 00:45:52.973 "uuid": "00000000-0000-0000-0000-000000000002", 00:45:52.973 "is_configured": true, 00:45:52.973 "data_offset": 256, 00:45:52.973 "data_size": 7936 00:45:52.973 } 00:45:52.973 ] 00:45:52.973 }' 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:45:52.973 05:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:53.542 05:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:45:53.542 05:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:53.542 05:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:53.542 [2024-12-09 05:36:40.366591] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:45:53.542 [2024-12-09 05:36:40.366637] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:45:53.542 [2024-12-09 05:36:40.366743] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:45:53.542 [2024-12-09 05:36:40.366843] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:45:53.542 [2024-12-09 05:36:40.366861] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:45:53.542 05:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:53.542 05:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:53.542 05:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:53.542 05:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:53.542 05:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:45:53.542 05:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:53.542 05:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:45:53.542 05:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:45:53.542 05:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:45:53.542 05:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:45:53.542 05:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:53.542 05:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:53.542 [2024-12-09 05:36:40.434610] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:45:53.542 [2024-12-09 05:36:40.434707] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:53.542 [2024-12-09 05:36:40.434738] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:45:53.542 [2024-12-09 05:36:40.434754] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:53.543 [2024-12-09 05:36:40.437992] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:53.543 [2024-12-09 05:36:40.438050] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:45:53.543 [2024-12-09 05:36:40.438211] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:45:53.543 [2024-12-09 05:36:40.438270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:45:53.543 [2024-12-09 05:36:40.438492] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:45:53.543 [2024-12-09 05:36:40.438558] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:45:53.543 [2024-12-09 05:36:40.438584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:45:53.543 [2024-12-09 05:36:40.438662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:45:53.543 [2024-12-09 05:36:40.438841] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:45:53.543 [2024-12-09 05:36:40.438857] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:45:53.543 pt1 00:45:53.543 [2024-12-09 05:36:40.439188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:45:53.543 [2024-12-09 05:36:40.439410] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:45:53.543 [2024-12-09 05:36:40.439431] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:45:53.543 [2024-12-09 05:36:40.439610] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:45:53.543 05:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:53.543 05:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:45:53.543 05:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:45:53.543 05:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:45:53.543 05:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:45:53.543 05:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:45:53.543 05:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:45:53.543 05:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:45:53.543 05:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:45:53.543 05:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:45:53.543 05:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:45:53.543 05:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:45:53.543 05:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:53.543 05:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:53.543 05:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:53.543 05:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:53.543 05:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:53.543 05:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:45:53.543 "name": "raid_bdev1", 00:45:53.543 "uuid": "5c32310a-25d0-481a-8211-4ce96bda72e1", 00:45:53.543 "strip_size_kb": 0, 00:45:53.543 "state": "online", 00:45:53.543 
"raid_level": "raid1", 00:45:53.543 "superblock": true, 00:45:53.543 "num_base_bdevs": 2, 00:45:53.543 "num_base_bdevs_discovered": 1, 00:45:53.543 "num_base_bdevs_operational": 1, 00:45:53.543 "base_bdevs_list": [ 00:45:53.543 { 00:45:53.543 "name": null, 00:45:53.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:45:53.543 "is_configured": false, 00:45:53.543 "data_offset": 256, 00:45:53.543 "data_size": 7936 00:45:53.543 }, 00:45:53.543 { 00:45:53.543 "name": "pt2", 00:45:53.543 "uuid": "00000000-0000-0000-0000-000000000002", 00:45:53.543 "is_configured": true, 00:45:53.543 "data_offset": 256, 00:45:53.543 "data_size": 7936 00:45:53.543 } 00:45:53.543 ] 00:45:53.543 }' 00:45:53.543 05:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:45:53.543 05:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:54.111 05:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:45:54.111 05:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:54.111 05:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:54.111 05:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:45:54.111 05:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:54.111 05:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:45:54.111 05:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:45:54.111 05:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:45:54.111 05:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:54.111 05:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # 
set +x 00:45:54.111 [2024-12-09 05:36:41.043173] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:45:54.111 05:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:54.111 05:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 5c32310a-25d0-481a-8211-4ce96bda72e1 '!=' 5c32310a-25d0-481a-8211-4ce96bda72e1 ']' 00:45:54.111 05:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86720 00:45:54.111 05:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86720 ']' 00:45:54.111 05:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86720 00:45:54.111 05:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:45:54.370 05:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:54.371 05:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86720 00:45:54.371 killing process with pid 86720 00:45:54.371 05:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:54.371 05:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:54.371 05:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86720' 00:45:54.371 05:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86720 00:45:54.371 [2024-12-09 05:36:41.115834] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:45:54.371 [2024-12-09 05:36:41.115947] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:45:54.371 05:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86720 00:45:54.371 [2024-12-09 05:36:41.116033] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:45:54.371 [2024-12-09 05:36:41.116056] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:45:54.371 [2024-12-09 05:36:41.289685] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:45:55.749 05:36:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:45:55.749 00:45:55.749 real 0m6.965s 00:45:55.749 user 0m10.995s 00:45:55.749 sys 0m1.053s 00:45:55.749 ************************************ 00:45:55.749 END TEST raid_superblock_test_4k 00:45:55.749 ************************************ 00:45:55.749 05:36:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:55.749 05:36:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:45:55.749 05:36:42 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:45:55.749 05:36:42 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:45:55.749 05:36:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:45:55.749 05:36:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:55.749 05:36:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:45:55.749 ************************************ 00:45:55.749 START TEST raid_rebuild_test_sb_4k 00:45:55.749 ************************************ 00:45:55.749 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:45:55.749 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:45:55.749 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:45:55.749 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:45:55.749 
05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:45:55.749 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:45:55.749 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:45:55.749 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:45:55.749 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:45:55.749 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:45:55.749 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:45:55.749 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:45:55.749 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:45:55.749 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:45:55.749 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:45:55.749 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:45:55.749 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:45:55.749 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:45:55.750 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:45:55.750 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:45:55.750 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:45:55.750 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:45:55.750 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # 
strip_size=0 00:45:55.750 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:45:55.750 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:45:55.750 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=87054 00:45:55.750 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:45:55.750 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 87054 00:45:55.750 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 87054 ']' 00:45:55.750 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:55.750 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:55.750 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:55.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:55.750 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:55.750 05:36:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:55.750 [2024-12-09 05:36:42.618051] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:45:55.750 [2024-12-09 05:36:42.618522] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:45:55.750 Zero copy mechanism will not be used. 
00:45:55.750 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87054 ] 00:45:56.009 [2024-12-09 05:36:42.808331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:56.009 [2024-12-09 05:36:42.951723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:56.266 [2024-12-09 05:36:43.168230] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:45:56.267 [2024-12-09 05:36:43.168273] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:45:56.833 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:56.833 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:45:56.833 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:45:56.833 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:45:56.833 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:56.833 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:56.833 BaseBdev1_malloc 00:45:56.833 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:56.833 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:45:56.833 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:56.833 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:56.833 [2024-12-09 05:36:43.618144] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:45:56.833 [2024-12-09 05:36:43.618249] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:56.833 [2024-12-09 05:36:43.618283] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:45:56.833 [2024-12-09 05:36:43.618302] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:56.833 [2024-12-09 05:36:43.621339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:56.833 [2024-12-09 05:36:43.621410] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:45:56.833 BaseBdev1 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:56.834 BaseBdev2_malloc 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:56.834 [2024-12-09 05:36:43.676436] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:45:56.834 [2024-12-09 05:36:43.676566] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:56.834 [2024-12-09 05:36:43.676600] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000007e80 00:45:56.834 [2024-12-09 05:36:43.676618] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:56.834 [2024-12-09 05:36:43.679933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:56.834 [2024-12-09 05:36:43.679982] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:45:56.834 BaseBdev2 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:56.834 spare_malloc 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:56.834 spare_delay 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:56.834 [2024-12-09 05:36:43.752071] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:45:56.834 
[2024-12-09 05:36:43.752185] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:56.834 [2024-12-09 05:36:43.752216] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:45:56.834 [2024-12-09 05:36:43.752235] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:56.834 [2024-12-09 05:36:43.755257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:56.834 [2024-12-09 05:36:43.755625] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:45:56.834 spare 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:56.834 [2024-12-09 05:36:43.764359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:45:56.834 [2024-12-09 05:36:43.767148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:45:56.834 [2024-12-09 05:36:43.767414] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:45:56.834 [2024-12-09 05:36:43.767451] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:45:56.834 [2024-12-09 05:36:43.767853] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:45:56.834 [2024-12-09 05:36:43.768132] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:45:56.834 [2024-12-09 05:36:43.768157] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000007780 00:45:56.834 [2024-12-09 05:36:43.768398] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:56.834 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:57.091 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:45:57.091 "name": "raid_bdev1", 00:45:57.091 "uuid": "f09f0a7a-4455-491a-81c2-bd5602132675", 00:45:57.091 "strip_size_kb": 0, 00:45:57.091 "state": "online", 00:45:57.091 "raid_level": "raid1", 00:45:57.091 "superblock": true, 00:45:57.091 "num_base_bdevs": 2, 00:45:57.092 "num_base_bdevs_discovered": 2, 00:45:57.092 "num_base_bdevs_operational": 2, 00:45:57.092 "base_bdevs_list": [ 00:45:57.092 { 00:45:57.092 "name": "BaseBdev1", 00:45:57.092 "uuid": "32ddcb2a-1276-5a92-8de4-13431b20bc11", 00:45:57.092 "is_configured": true, 00:45:57.092 "data_offset": 256, 00:45:57.092 "data_size": 7936 00:45:57.092 }, 00:45:57.092 { 00:45:57.092 "name": "BaseBdev2", 00:45:57.092 "uuid": "3498949d-524c-5617-b484-12c36d3b2f6d", 00:45:57.092 "is_configured": true, 00:45:57.092 "data_offset": 256, 00:45:57.092 "data_size": 7936 00:45:57.092 } 00:45:57.092 ] 00:45:57.092 }' 00:45:57.092 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:45:57.092 05:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:57.658 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:45:57.658 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:45:57.658 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:57.658 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:57.658 [2024-12-09 05:36:44.336998] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:45:57.658 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:57.658 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:45:57.658 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:45:57.658 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:57.658 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:57.658 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:45:57.658 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:57.658 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:45:57.658 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:45:57.658 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:45:57.658 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:45:57.658 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:45:57.658 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:45:57.658 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:45:57.658 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:45:57.658 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:45:57.658 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:45:57.659 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:45:57.659 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:45:57.659 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:45:57.659 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:45:57.918 [2024-12-09 05:36:44.728803] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:45:57.918 /dev/nbd0 00:45:57.918 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:45:57.918 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:45:57.918 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:45:57.918 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:45:57.918 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:45:57.918 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:45:57.918 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:45:57.918 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:45:57.918 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:45:57.918 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:45:57.918 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:45:57.918 1+0 records in 00:45:57.918 1+0 records out 00:45:57.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000568773 s, 7.2 MB/s 00:45:57.918 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:57.918 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:45:57.918 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:57.918 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:45:57.918 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:45:57.918 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:45:57.918 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:45:57.918 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:45:57.918 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:45:57.918 05:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:45:58.852 7936+0 records in 00:45:58.852 7936+0 records out 00:45:58.852 32505856 bytes (33 MB, 31 MiB) copied, 0.947833 s, 34.3 MB/s 00:45:58.852 05:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:45:58.852 05:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:45:58.852 05:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:45:58.852 05:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:45:58.852 05:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:45:58.852 05:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:45:58.852 05:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:45:59.110 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:45:59.110 [2024-12-09 05:36:46.025374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:45:59.110 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:45:59.110 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:45:59.110 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:45:59.110 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:45:59.110 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:45:59.110 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:45:59.110 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:45:59.110 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:45:59.110 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:59.110 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:59.110 [2024-12-09 05:36:46.041462] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:45:59.110 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:59.110 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:45:59.110 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:45:59.110 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:45:59.110 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:45:59.110 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:45:59.110 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:45:59.110 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:45:59.110 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:45:59.111 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:45:59.111 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:45:59.111 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:45:59.111 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:59.111 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:59.111 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:59.111 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:59.370 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:45:59.370 "name": "raid_bdev1", 00:45:59.370 "uuid": "f09f0a7a-4455-491a-81c2-bd5602132675", 00:45:59.370 "strip_size_kb": 0, 00:45:59.370 "state": "online", 00:45:59.370 "raid_level": "raid1", 00:45:59.370 "superblock": true, 00:45:59.370 "num_base_bdevs": 2, 00:45:59.370 "num_base_bdevs_discovered": 1, 00:45:59.370 "num_base_bdevs_operational": 1, 00:45:59.370 "base_bdevs_list": [ 00:45:59.370 { 00:45:59.370 "name": null, 00:45:59.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:45:59.370 "is_configured": false, 00:45:59.370 "data_offset": 0, 00:45:59.370 "data_size": 7936 00:45:59.370 }, 00:45:59.370 { 00:45:59.370 "name": "BaseBdev2", 00:45:59.370 "uuid": "3498949d-524c-5617-b484-12c36d3b2f6d", 00:45:59.370 "is_configured": true, 00:45:59.370 "data_offset": 256, 00:45:59.370 "data_size": 7936 00:45:59.370 } 00:45:59.370 ] 00:45:59.370 }' 00:45:59.370 05:36:46 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:45:59.370 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:59.628 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:45:59.628 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:59.628 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:45:59.628 [2024-12-09 05:36:46.557822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:45:59.628 [2024-12-09 05:36:46.577083] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:45:59.628 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:59.628 05:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:45:59.628 [2024-12-09 05:36:46.580094] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:46:01.006 "name": "raid_bdev1", 00:46:01.006 "uuid": "f09f0a7a-4455-491a-81c2-bd5602132675", 00:46:01.006 "strip_size_kb": 0, 00:46:01.006 "state": "online", 00:46:01.006 "raid_level": "raid1", 00:46:01.006 "superblock": true, 00:46:01.006 "num_base_bdevs": 2, 00:46:01.006 "num_base_bdevs_discovered": 2, 00:46:01.006 "num_base_bdevs_operational": 2, 00:46:01.006 "process": { 00:46:01.006 "type": "rebuild", 00:46:01.006 "target": "spare", 00:46:01.006 "progress": { 00:46:01.006 "blocks": 2560, 00:46:01.006 "percent": 32 00:46:01.006 } 00:46:01.006 }, 00:46:01.006 "base_bdevs_list": [ 00:46:01.006 { 00:46:01.006 "name": "spare", 00:46:01.006 "uuid": "a9561b5d-5bce-5c64-be20-889229730432", 00:46:01.006 "is_configured": true, 00:46:01.006 "data_offset": 256, 00:46:01.006 "data_size": 7936 00:46:01.006 }, 00:46:01.006 { 00:46:01.006 "name": "BaseBdev2", 00:46:01.006 "uuid": "3498949d-524c-5617-b484-12c36d3b2f6d", 00:46:01.006 "is_configured": true, 00:46:01.006 "data_offset": 256, 00:46:01.006 "data_size": 7936 00:46:01.006 } 00:46:01.006 ] 00:46:01.006 }' 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd 
bdev_raid_remove_base_bdev spare 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:01.006 [2024-12-09 05:36:47.761510] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:46:01.006 [2024-12-09 05:36:47.789491] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:46:01.006 [2024-12-09 05:36:47.789576] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:46:01.006 [2024-12-09 05:36:47.789601] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:46:01.006 [2024-12-09 05:36:47.789616] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:01.006 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:01.006 "name": "raid_bdev1", 00:46:01.006 "uuid": "f09f0a7a-4455-491a-81c2-bd5602132675", 00:46:01.006 "strip_size_kb": 0, 00:46:01.006 "state": "online", 00:46:01.006 "raid_level": "raid1", 00:46:01.006 "superblock": true, 00:46:01.006 "num_base_bdevs": 2, 00:46:01.006 "num_base_bdevs_discovered": 1, 00:46:01.006 "num_base_bdevs_operational": 1, 00:46:01.006 "base_bdevs_list": [ 00:46:01.006 { 00:46:01.006 "name": null, 00:46:01.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:01.007 "is_configured": false, 00:46:01.007 "data_offset": 0, 00:46:01.007 "data_size": 7936 00:46:01.007 }, 00:46:01.007 { 00:46:01.007 "name": "BaseBdev2", 00:46:01.007 "uuid": "3498949d-524c-5617-b484-12c36d3b2f6d", 00:46:01.007 "is_configured": true, 00:46:01.007 "data_offset": 256, 00:46:01.007 "data_size": 7936 00:46:01.007 } 00:46:01.007 ] 00:46:01.007 }' 00:46:01.007 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:01.007 05:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:01.572 05:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:46:01.572 
05:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:46:01.572 05:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:46:01.572 05:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:46:01.572 05:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:46:01.572 05:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:01.572 05:36:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:01.572 05:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:01.573 05:36:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:01.573 05:36:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:01.573 05:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:46:01.573 "name": "raid_bdev1", 00:46:01.573 "uuid": "f09f0a7a-4455-491a-81c2-bd5602132675", 00:46:01.573 "strip_size_kb": 0, 00:46:01.573 "state": "online", 00:46:01.573 "raid_level": "raid1", 00:46:01.573 "superblock": true, 00:46:01.573 "num_base_bdevs": 2, 00:46:01.573 "num_base_bdevs_discovered": 1, 00:46:01.573 "num_base_bdevs_operational": 1, 00:46:01.573 "base_bdevs_list": [ 00:46:01.573 { 00:46:01.573 "name": null, 00:46:01.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:01.573 "is_configured": false, 00:46:01.573 "data_offset": 0, 00:46:01.573 "data_size": 7936 00:46:01.573 }, 00:46:01.573 { 00:46:01.573 "name": "BaseBdev2", 00:46:01.573 "uuid": "3498949d-524c-5617-b484-12c36d3b2f6d", 00:46:01.573 "is_configured": true, 00:46:01.573 "data_offset": 256, 00:46:01.573 "data_size": 7936 00:46:01.573 } 00:46:01.573 ] 00:46:01.573 }' 00:46:01.573 05:36:48 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:46:01.573 05:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:46:01.573 05:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:46:01.573 05:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:46:01.573 05:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:46:01.573 05:36:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:01.573 05:36:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:01.573 [2024-12-09 05:36:48.528039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:46:01.573 [2024-12-09 05:36:48.544299] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:46:01.830 05:36:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:01.830 05:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:46:01.830 [2024-12-09 05:36:48.546960] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:46:02.762 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:46:02.762 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:46:02.762 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:46:02.762 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:46:02.762 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:46:02.762 05:36:49 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:02.762 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:02.762 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:02.762 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:02.762 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:02.762 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:46:02.762 "name": "raid_bdev1", 00:46:02.762 "uuid": "f09f0a7a-4455-491a-81c2-bd5602132675", 00:46:02.762 "strip_size_kb": 0, 00:46:02.762 "state": "online", 00:46:02.762 "raid_level": "raid1", 00:46:02.762 "superblock": true, 00:46:02.762 "num_base_bdevs": 2, 00:46:02.762 "num_base_bdevs_discovered": 2, 00:46:02.762 "num_base_bdevs_operational": 2, 00:46:02.762 "process": { 00:46:02.762 "type": "rebuild", 00:46:02.762 "target": "spare", 00:46:02.762 "progress": { 00:46:02.762 "blocks": 2560, 00:46:02.762 "percent": 32 00:46:02.762 } 00:46:02.762 }, 00:46:02.762 "base_bdevs_list": [ 00:46:02.762 { 00:46:02.762 "name": "spare", 00:46:02.762 "uuid": "a9561b5d-5bce-5c64-be20-889229730432", 00:46:02.762 "is_configured": true, 00:46:02.762 "data_offset": 256, 00:46:02.762 "data_size": 7936 00:46:02.762 }, 00:46:02.762 { 00:46:02.762 "name": "BaseBdev2", 00:46:02.762 "uuid": "3498949d-524c-5617-b484-12c36d3b2f6d", 00:46:02.762 "is_configured": true, 00:46:02.762 "data_offset": 256, 00:46:02.762 "data_size": 7936 00:46:02.762 } 00:46:02.762 ] 00:46:02.762 }' 00:46:02.762 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:46:02.762 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:46:02.762 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:46:02.762 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:46:02.762 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:46:02.762 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:46:02.762 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:46:02.762 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:46:02.762 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:46:02.762 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:46:02.762 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=745 00:46:02.762 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:46:02.762 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:46:02.762 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:46:02.762 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:46:02.762 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:46:02.762 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:46:02.762 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:02.763 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:02.763 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:03.020 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:46:03.020 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:03.020 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:46:03.020 "name": "raid_bdev1", 00:46:03.020 "uuid": "f09f0a7a-4455-491a-81c2-bd5602132675", 00:46:03.020 "strip_size_kb": 0, 00:46:03.020 "state": "online", 00:46:03.020 "raid_level": "raid1", 00:46:03.020 "superblock": true, 00:46:03.020 "num_base_bdevs": 2, 00:46:03.020 "num_base_bdevs_discovered": 2, 00:46:03.020 "num_base_bdevs_operational": 2, 00:46:03.020 "process": { 00:46:03.020 "type": "rebuild", 00:46:03.020 "target": "spare", 00:46:03.020 "progress": { 00:46:03.020 "blocks": 2816, 00:46:03.020 "percent": 35 00:46:03.020 } 00:46:03.020 }, 00:46:03.020 "base_bdevs_list": [ 00:46:03.020 { 00:46:03.020 "name": "spare", 00:46:03.020 "uuid": "a9561b5d-5bce-5c64-be20-889229730432", 00:46:03.020 "is_configured": true, 00:46:03.020 "data_offset": 256, 00:46:03.020 "data_size": 7936 00:46:03.020 }, 00:46:03.020 { 00:46:03.020 "name": "BaseBdev2", 00:46:03.020 "uuid": "3498949d-524c-5617-b484-12c36d3b2f6d", 00:46:03.020 "is_configured": true, 00:46:03.020 "data_offset": 256, 00:46:03.020 "data_size": 7936 00:46:03.020 } 00:46:03.020 ] 00:46:03.020 }' 00:46:03.020 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:46:03.020 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:46:03.020 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:46:03.020 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:46:03.020 05:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:46:03.955 05:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 
00:46:03.955 05:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:46:03.955 05:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:46:03.955 05:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:46:03.955 05:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:46:03.955 05:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:46:03.955 05:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:03.955 05:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:03.955 05:36:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:03.955 05:36:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:03.955 05:36:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:04.212 05:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:46:04.212 "name": "raid_bdev1", 00:46:04.212 "uuid": "f09f0a7a-4455-491a-81c2-bd5602132675", 00:46:04.212 "strip_size_kb": 0, 00:46:04.212 "state": "online", 00:46:04.212 "raid_level": "raid1", 00:46:04.212 "superblock": true, 00:46:04.212 "num_base_bdevs": 2, 00:46:04.212 "num_base_bdevs_discovered": 2, 00:46:04.212 "num_base_bdevs_operational": 2, 00:46:04.212 "process": { 00:46:04.212 "type": "rebuild", 00:46:04.212 "target": "spare", 00:46:04.212 "progress": { 00:46:04.212 "blocks": 5888, 00:46:04.212 "percent": 74 00:46:04.212 } 00:46:04.212 }, 00:46:04.212 "base_bdevs_list": [ 00:46:04.212 { 00:46:04.212 "name": "spare", 00:46:04.212 "uuid": "a9561b5d-5bce-5c64-be20-889229730432", 00:46:04.212 "is_configured": true, 00:46:04.212 
"data_offset": 256, 00:46:04.212 "data_size": 7936 00:46:04.212 }, 00:46:04.212 { 00:46:04.212 "name": "BaseBdev2", 00:46:04.212 "uuid": "3498949d-524c-5617-b484-12c36d3b2f6d", 00:46:04.212 "is_configured": true, 00:46:04.212 "data_offset": 256, 00:46:04.212 "data_size": 7936 00:46:04.212 } 00:46:04.212 ] 00:46:04.212 }' 00:46:04.212 05:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:46:04.212 05:36:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:46:04.212 05:36:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:46:04.212 05:36:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:46:04.212 05:36:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:46:04.777 [2024-12-09 05:36:51.669523] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:46:04.777 [2024-12-09 05:36:51.669637] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:46:04.777 [2024-12-09 05:36:51.669845] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:46:05.343 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:46:05.343 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:46:05.343 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:46:05.343 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:46:05.343 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:46:05.343 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:46:05.343 05:36:52 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:05.343 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:05.343 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:05.343 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:05.343 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:05.343 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:46:05.343 "name": "raid_bdev1", 00:46:05.343 "uuid": "f09f0a7a-4455-491a-81c2-bd5602132675", 00:46:05.343 "strip_size_kb": 0, 00:46:05.343 "state": "online", 00:46:05.343 "raid_level": "raid1", 00:46:05.343 "superblock": true, 00:46:05.343 "num_base_bdevs": 2, 00:46:05.343 "num_base_bdevs_discovered": 2, 00:46:05.343 "num_base_bdevs_operational": 2, 00:46:05.343 "base_bdevs_list": [ 00:46:05.343 { 00:46:05.343 "name": "spare", 00:46:05.344 "uuid": "a9561b5d-5bce-5c64-be20-889229730432", 00:46:05.344 "is_configured": true, 00:46:05.344 "data_offset": 256, 00:46:05.344 "data_size": 7936 00:46:05.344 }, 00:46:05.344 { 00:46:05.344 "name": "BaseBdev2", 00:46:05.344 "uuid": "3498949d-524c-5617-b484-12c36d3b2f6d", 00:46:05.344 "is_configured": true, 00:46:05.344 "data_offset": 256, 00:46:05.344 "data_size": 7936 00:46:05.344 } 00:46:05.344 ] 00:46:05.344 }' 00:46:05.344 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:46:05.344 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:46:05.344 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:46:05.344 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:46:05.344 05:36:52 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:46:05.344 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:46:05.344 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:46:05.344 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:46:05.344 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:46:05.344 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:46:05.344 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:05.344 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:05.344 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:05.344 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:05.344 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:05.344 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:46:05.344 "name": "raid_bdev1", 00:46:05.344 "uuid": "f09f0a7a-4455-491a-81c2-bd5602132675", 00:46:05.344 "strip_size_kb": 0, 00:46:05.344 "state": "online", 00:46:05.344 "raid_level": "raid1", 00:46:05.344 "superblock": true, 00:46:05.344 "num_base_bdevs": 2, 00:46:05.344 "num_base_bdevs_discovered": 2, 00:46:05.344 "num_base_bdevs_operational": 2, 00:46:05.344 "base_bdevs_list": [ 00:46:05.344 { 00:46:05.344 "name": "spare", 00:46:05.344 "uuid": "a9561b5d-5bce-5c64-be20-889229730432", 00:46:05.344 "is_configured": true, 00:46:05.344 "data_offset": 256, 00:46:05.344 "data_size": 7936 00:46:05.344 }, 00:46:05.344 { 00:46:05.344 "name": "BaseBdev2", 00:46:05.344 "uuid": 
"3498949d-524c-5617-b484-12c36d3b2f6d", 00:46:05.344 "is_configured": true, 00:46:05.344 "data_offset": 256, 00:46:05.344 "data_size": 7936 00:46:05.344 } 00:46:05.344 ] 00:46:05.344 }' 00:46:05.344 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:46:05.602 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:46:05.602 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:46:05.602 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:46:05.602 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:46:05.602 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:46:05.602 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:46:05.602 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:05.602 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:05.602 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:46:05.602 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:05.602 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:05.602 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:46:05.602 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:05.602 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:05.602 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:46:05.602 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:05.603 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:05.603 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:05.603 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:05.603 "name": "raid_bdev1", 00:46:05.603 "uuid": "f09f0a7a-4455-491a-81c2-bd5602132675", 00:46:05.603 "strip_size_kb": 0, 00:46:05.603 "state": "online", 00:46:05.603 "raid_level": "raid1", 00:46:05.603 "superblock": true, 00:46:05.603 "num_base_bdevs": 2, 00:46:05.603 "num_base_bdevs_discovered": 2, 00:46:05.603 "num_base_bdevs_operational": 2, 00:46:05.603 "base_bdevs_list": [ 00:46:05.603 { 00:46:05.603 "name": "spare", 00:46:05.603 "uuid": "a9561b5d-5bce-5c64-be20-889229730432", 00:46:05.603 "is_configured": true, 00:46:05.603 "data_offset": 256, 00:46:05.603 "data_size": 7936 00:46:05.603 }, 00:46:05.603 { 00:46:05.603 "name": "BaseBdev2", 00:46:05.603 "uuid": "3498949d-524c-5617-b484-12c36d3b2f6d", 00:46:05.603 "is_configured": true, 00:46:05.603 "data_offset": 256, 00:46:05.603 "data_size": 7936 00:46:05.603 } 00:46:05.603 ] 00:46:05.603 }' 00:46:05.603 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:05.603 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:06.169 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:46:06.169 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:06.169 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:06.169 [2024-12-09 05:36:52.917957] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:46:06.169 [2024-12-09 
05:36:52.918192] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:46:06.169 [2024-12-09 05:36:52.918431] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:46:06.169 [2024-12-09 05:36:52.918669] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:46:06.169 [2024-12-09 05:36:52.918701] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:46:06.169 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:06.169 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:06.169 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:46:06.169 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:06.169 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:06.169 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:06.169 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:46:06.169 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:46:06.169 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:46:06.169 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:46:06.169 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:46:06.169 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:46:06.169 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:46:06.169 
05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:46:06.169 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:46:06.169 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:46:06.170 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:46:06.170 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:46:06.170 05:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:46:06.427 /dev/nbd0 00:46:06.427 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:46:06.427 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:46:06.427 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:46:06.427 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:46:06.427 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:46:06.427 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:46:06.427 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:46:06.427 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:46:06.427 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:46:06.427 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:46:06.427 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:46:06.427 1+0 
records in 00:46:06.427 1+0 records out 00:46:06.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345098 s, 11.9 MB/s 00:46:06.427 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:46:06.427 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:46:06.427 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:46:06.427 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:46:06.427 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:46:06.427 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:46:06.427 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:46:06.428 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:46:06.992 /dev/nbd1 00:46:06.992 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:46:06.992 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:46:06.992 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:46:06.992 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:46:06.992 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:46:06.992 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:46:06.992 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:46:06.992 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 
00:46:06.992 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:46:06.992 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:46:06.992 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:46:06.992 1+0 records in 00:46:06.992 1+0 records out 00:46:06.992 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341396 s, 12.0 MB/s 00:46:06.992 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:46:06.992 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:46:06.992 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:46:06.992 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:46:06.992 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:46:06.992 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:46:06.992 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:46:06.992 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:46:06.992 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:46:06.992 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:46:06.992 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:46:06.992 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:46:06.992 05:36:53 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:46:06.992 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:46:06.992 05:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:46:07.250 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:46:07.250 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:46:07.250 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:46:07.250 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:46:07.250 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:46:07.250 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:46:07.508 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:46:07.508 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:46:07.508 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:46:07.508 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:46:07.770 
05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:07.770 [2024-12-09 05:36:54.561492] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:46:07.770 [2024-12-09 05:36:54.561560] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:46:07.770 [2024-12-09 05:36:54.561599] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:46:07.770 [2024-12-09 05:36:54.561615] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:46:07.770 [2024-12-09 05:36:54.564891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:46:07.770 [2024-12-09 05:36:54.564949] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:46:07.770 [2024-12-09 05:36:54.565074] bdev_raid.c:3907:raid_bdev_examine_cont: 
*DEBUG*: raid superblock found on bdev spare 00:46:07.770 [2024-12-09 05:36:54.565144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:46:07.770 [2024-12-09 05:36:54.565355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:46:07.770 spare 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:07.770 [2024-12-09 05:36:54.665507] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:46:07.770 [2024-12-09 05:36:54.665709] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:46:07.770 [2024-12-09 05:36:54.666166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:46:07.770 [2024-12-09 05:36:54.666612] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:46:07.770 [2024-12-09 05:36:54.666641] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:46:07.770 [2024-12-09 05:36:54.666933] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:07.770 "name": "raid_bdev1", 00:46:07.770 "uuid": "f09f0a7a-4455-491a-81c2-bd5602132675", 00:46:07.770 "strip_size_kb": 0, 00:46:07.770 "state": "online", 00:46:07.770 "raid_level": "raid1", 00:46:07.770 "superblock": true, 00:46:07.770 "num_base_bdevs": 2, 00:46:07.770 "num_base_bdevs_discovered": 2, 00:46:07.770 "num_base_bdevs_operational": 2, 00:46:07.770 "base_bdevs_list": [ 00:46:07.770 { 00:46:07.770 "name": "spare", 00:46:07.770 "uuid": "a9561b5d-5bce-5c64-be20-889229730432", 00:46:07.770 "is_configured": true, 00:46:07.770 "data_offset": 256, 
00:46:07.770 "data_size": 7936 00:46:07.770 }, 00:46:07.770 { 00:46:07.770 "name": "BaseBdev2", 00:46:07.770 "uuid": "3498949d-524c-5617-b484-12c36d3b2f6d", 00:46:07.770 "is_configured": true, 00:46:07.770 "data_offset": 256, 00:46:07.770 "data_size": 7936 00:46:07.770 } 00:46:07.770 ] 00:46:07.770 }' 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:07.770 05:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:08.344 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:46:08.344 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:46:08.344 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:46:08.344 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:46:08.344 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:46:08.344 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:08.344 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:08.344 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:08.344 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:08.344 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:08.344 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:46:08.344 "name": "raid_bdev1", 00:46:08.344 "uuid": "f09f0a7a-4455-491a-81c2-bd5602132675", 00:46:08.344 "strip_size_kb": 0, 00:46:08.344 "state": "online", 00:46:08.344 "raid_level": "raid1", 00:46:08.344 "superblock": true, 00:46:08.344 
"num_base_bdevs": 2, 00:46:08.344 "num_base_bdevs_discovered": 2, 00:46:08.344 "num_base_bdevs_operational": 2, 00:46:08.344 "base_bdevs_list": [ 00:46:08.344 { 00:46:08.344 "name": "spare", 00:46:08.344 "uuid": "a9561b5d-5bce-5c64-be20-889229730432", 00:46:08.344 "is_configured": true, 00:46:08.344 "data_offset": 256, 00:46:08.344 "data_size": 7936 00:46:08.344 }, 00:46:08.344 { 00:46:08.344 "name": "BaseBdev2", 00:46:08.344 "uuid": "3498949d-524c-5617-b484-12c36d3b2f6d", 00:46:08.344 "is_configured": true, 00:46:08.344 "data_offset": 256, 00:46:08.344 "data_size": 7936 00:46:08.344 } 00:46:08.344 ] 00:46:08.344 }' 00:46:08.344 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:46:08.344 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:46:08.344 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:46:08.601 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:46:08.601 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:08.601 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:08.601 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:46:08.601 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:08.601 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:08.601 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:46:08.601 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:46:08.601 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:08.601 
05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:08.601 [2024-12-09 05:36:55.383762] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:46:08.601 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:08.601 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:46:08.601 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:46:08.601 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:46:08.601 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:08.601 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:08.601 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:46:08.601 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:08.601 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:08.601 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:46:08.601 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:08.601 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:08.601 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:08.601 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:08.601 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:08.601 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:46:08.601 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:08.601 "name": "raid_bdev1", 00:46:08.601 "uuid": "f09f0a7a-4455-491a-81c2-bd5602132675", 00:46:08.601 "strip_size_kb": 0, 00:46:08.601 "state": "online", 00:46:08.601 "raid_level": "raid1", 00:46:08.601 "superblock": true, 00:46:08.601 "num_base_bdevs": 2, 00:46:08.601 "num_base_bdevs_discovered": 1, 00:46:08.601 "num_base_bdevs_operational": 1, 00:46:08.601 "base_bdevs_list": [ 00:46:08.601 { 00:46:08.601 "name": null, 00:46:08.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:08.601 "is_configured": false, 00:46:08.601 "data_offset": 0, 00:46:08.601 "data_size": 7936 00:46:08.601 }, 00:46:08.601 { 00:46:08.601 "name": "BaseBdev2", 00:46:08.601 "uuid": "3498949d-524c-5617-b484-12c36d3b2f6d", 00:46:08.601 "is_configured": true, 00:46:08.601 "data_offset": 256, 00:46:08.601 "data_size": 7936 00:46:08.601 } 00:46:08.601 ] 00:46:08.601 }' 00:46:08.601 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:08.601 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:09.166 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:46:09.166 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:09.166 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:09.166 [2024-12-09 05:36:55.944003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:46:09.166 [2024-12-09 05:36:55.944311] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:46:09.166 [2024-12-09 05:36:55.944345] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:46:09.166 [2024-12-09 05:36:55.944407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:46:09.166 [2024-12-09 05:36:55.960840] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:46:09.166 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:09.166 05:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:46:09.166 [2024-12-09 05:36:55.963535] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:46:10.100 05:36:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:46:10.100 05:36:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:46:10.100 05:36:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:46:10.100 05:36:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:46:10.100 05:36:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:46:10.100 05:36:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:10.100 05:36:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:10.100 05:36:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:10.100 05:36:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:10.100 05:36:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:10.100 05:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:46:10.100 "name": "raid_bdev1", 00:46:10.100 "uuid": "f09f0a7a-4455-491a-81c2-bd5602132675", 00:46:10.100 "strip_size_kb": 0, 00:46:10.100 "state": "online", 
00:46:10.100 "raid_level": "raid1", 00:46:10.100 "superblock": true, 00:46:10.100 "num_base_bdevs": 2, 00:46:10.100 "num_base_bdevs_discovered": 2, 00:46:10.100 "num_base_bdevs_operational": 2, 00:46:10.100 "process": { 00:46:10.100 "type": "rebuild", 00:46:10.100 "target": "spare", 00:46:10.100 "progress": { 00:46:10.100 "blocks": 2560, 00:46:10.100 "percent": 32 00:46:10.100 } 00:46:10.100 }, 00:46:10.100 "base_bdevs_list": [ 00:46:10.100 { 00:46:10.100 "name": "spare", 00:46:10.100 "uuid": "a9561b5d-5bce-5c64-be20-889229730432", 00:46:10.100 "is_configured": true, 00:46:10.100 "data_offset": 256, 00:46:10.100 "data_size": 7936 00:46:10.100 }, 00:46:10.100 { 00:46:10.100 "name": "BaseBdev2", 00:46:10.100 "uuid": "3498949d-524c-5617-b484-12c36d3b2f6d", 00:46:10.100 "is_configured": true, 00:46:10.100 "data_offset": 256, 00:46:10.100 "data_size": 7936 00:46:10.100 } 00:46:10.100 ] 00:46:10.100 }' 00:46:10.100 05:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:46:10.359 05:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:46:10.359 05:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:46:10.359 05:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:46:10.359 05:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:46:10.359 05:36:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:10.359 05:36:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:10.359 [2024-12-09 05:36:57.145346] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:46:10.359 [2024-12-09 05:36:57.173137] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:46:10.359 [2024-12-09 
05:36:57.173258] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:46:10.359 [2024-12-09 05:36:57.173286] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:46:10.359 [2024-12-09 05:36:57.173302] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:46:10.359 05:36:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:10.359 05:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:46:10.359 05:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:46:10.359 05:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:46:10.359 05:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:10.359 05:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:10.359 05:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:46:10.359 05:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:10.359 05:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:10.359 05:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:46:10.359 05:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:10.359 05:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:10.359 05:36:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:10.359 05:36:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:10.359 05:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:46:10.359 05:36:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:10.359 05:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:10.359 "name": "raid_bdev1", 00:46:10.359 "uuid": "f09f0a7a-4455-491a-81c2-bd5602132675", 00:46:10.359 "strip_size_kb": 0, 00:46:10.359 "state": "online", 00:46:10.359 "raid_level": "raid1", 00:46:10.359 "superblock": true, 00:46:10.359 "num_base_bdevs": 2, 00:46:10.359 "num_base_bdevs_discovered": 1, 00:46:10.359 "num_base_bdevs_operational": 1, 00:46:10.359 "base_bdevs_list": [ 00:46:10.359 { 00:46:10.359 "name": null, 00:46:10.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:10.359 "is_configured": false, 00:46:10.359 "data_offset": 0, 00:46:10.359 "data_size": 7936 00:46:10.359 }, 00:46:10.359 { 00:46:10.360 "name": "BaseBdev2", 00:46:10.360 "uuid": "3498949d-524c-5617-b484-12c36d3b2f6d", 00:46:10.360 "is_configured": true, 00:46:10.360 "data_offset": 256, 00:46:10.360 "data_size": 7936 00:46:10.360 } 00:46:10.360 ] 00:46:10.360 }' 00:46:10.360 05:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:10.360 05:36:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:10.927 05:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:46:10.927 05:36:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:10.927 05:36:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:10.927 [2024-12-09 05:36:57.740120] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:46:10.927 [2024-12-09 05:36:57.740215] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:46:10.927 [2024-12-09 05:36:57.740252] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:46:10.927 [2024-12-09 05:36:57.740270] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:46:10.927 [2024-12-09 05:36:57.740982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:46:10.927 [2024-12-09 05:36:57.741021] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:46:10.927 [2024-12-09 05:36:57.741156] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:46:10.927 [2024-12-09 05:36:57.741182] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:46:10.927 [2024-12-09 05:36:57.741208] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:46:10.927 [2024-12-09 05:36:57.741246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:46:10.927 [2024-12-09 05:36:57.758753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:46:10.927 spare 00:46:10.927 05:36:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:10.927 05:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:46:10.927 [2024-12-09 05:36:57.761648] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:46:11.865 05:36:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:46:11.865 05:36:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:46:11.865 05:36:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:46:11.865 05:36:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:46:11.865 05:36:58 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:46:11.865 05:36:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:11.865 05:36:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:11.865 05:36:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:11.865 05:36:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:11.866 05:36:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:11.866 05:36:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:46:11.866 "name": "raid_bdev1", 00:46:11.866 "uuid": "f09f0a7a-4455-491a-81c2-bd5602132675", 00:46:11.866 "strip_size_kb": 0, 00:46:11.866 "state": "online", 00:46:11.866 "raid_level": "raid1", 00:46:11.866 "superblock": true, 00:46:11.866 "num_base_bdevs": 2, 00:46:11.866 "num_base_bdevs_discovered": 2, 00:46:11.866 "num_base_bdevs_operational": 2, 00:46:11.866 "process": { 00:46:11.866 "type": "rebuild", 00:46:11.866 "target": "spare", 00:46:11.866 "progress": { 00:46:11.866 "blocks": 2560, 00:46:11.866 "percent": 32 00:46:11.866 } 00:46:11.866 }, 00:46:11.866 "base_bdevs_list": [ 00:46:11.866 { 00:46:11.866 "name": "spare", 00:46:11.866 "uuid": "a9561b5d-5bce-5c64-be20-889229730432", 00:46:11.866 "is_configured": true, 00:46:11.866 "data_offset": 256, 00:46:11.866 "data_size": 7936 00:46:11.866 }, 00:46:11.866 { 00:46:11.866 "name": "BaseBdev2", 00:46:11.866 "uuid": "3498949d-524c-5617-b484-12c36d3b2f6d", 00:46:11.866 "is_configured": true, 00:46:11.866 "data_offset": 256, 00:46:11.866 "data_size": 7936 00:46:11.866 } 00:46:11.866 ] 00:46:11.866 }' 00:46:11.866 05:36:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:46:12.125 05:36:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:46:12.125 05:36:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:46:12.125 05:36:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:46:12.125 05:36:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:46:12.125 05:36:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:12.125 05:36:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:12.125 [2024-12-09 05:36:58.939823] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:46:12.125 [2024-12-09 05:36:58.971707] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:46:12.125 [2024-12-09 05:36:58.972055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:46:12.125 [2024-12-09 05:36:58.972100] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:46:12.125 [2024-12-09 05:36:58.972114] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:46:12.125 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:12.125 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:46:12.125 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:46:12.125 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:46:12.125 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:12.125 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:12.125 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:46:12.125 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:12.125 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:12.125 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:46:12.125 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:12.125 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:12.125 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:12.125 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:12.125 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:12.125 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:12.125 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:12.125 "name": "raid_bdev1", 00:46:12.125 "uuid": "f09f0a7a-4455-491a-81c2-bd5602132675", 00:46:12.125 "strip_size_kb": 0, 00:46:12.125 "state": "online", 00:46:12.125 "raid_level": "raid1", 00:46:12.125 "superblock": true, 00:46:12.125 "num_base_bdevs": 2, 00:46:12.126 "num_base_bdevs_discovered": 1, 00:46:12.126 "num_base_bdevs_operational": 1, 00:46:12.126 "base_bdevs_list": [ 00:46:12.126 { 00:46:12.126 "name": null, 00:46:12.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:12.126 "is_configured": false, 00:46:12.126 "data_offset": 0, 00:46:12.126 "data_size": 7936 00:46:12.126 }, 00:46:12.126 { 00:46:12.126 "name": "BaseBdev2", 00:46:12.126 "uuid": "3498949d-524c-5617-b484-12c36d3b2f6d", 00:46:12.126 "is_configured": true, 00:46:12.126 "data_offset": 256, 00:46:12.126 "data_size": 7936 00:46:12.126 } 00:46:12.126 ] 00:46:12.126 }' 
00:46:12.126 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:12.126 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:12.694 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:46:12.694 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:46:12.694 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:46:12.694 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:46:12.694 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:46:12.694 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:12.694 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:12.694 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:12.694 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:12.694 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:12.694 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:46:12.694 "name": "raid_bdev1", 00:46:12.694 "uuid": "f09f0a7a-4455-491a-81c2-bd5602132675", 00:46:12.694 "strip_size_kb": 0, 00:46:12.694 "state": "online", 00:46:12.694 "raid_level": "raid1", 00:46:12.694 "superblock": true, 00:46:12.694 "num_base_bdevs": 2, 00:46:12.694 "num_base_bdevs_discovered": 1, 00:46:12.694 "num_base_bdevs_operational": 1, 00:46:12.694 "base_bdevs_list": [ 00:46:12.694 { 00:46:12.694 "name": null, 00:46:12.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:12.694 "is_configured": false, 00:46:12.694 "data_offset": 0, 
00:46:12.694 "data_size": 7936 00:46:12.694 }, 00:46:12.694 { 00:46:12.694 "name": "BaseBdev2", 00:46:12.694 "uuid": "3498949d-524c-5617-b484-12c36d3b2f6d", 00:46:12.694 "is_configured": true, 00:46:12.694 "data_offset": 256, 00:46:12.694 "data_size": 7936 00:46:12.694 } 00:46:12.694 ] 00:46:12.694 }' 00:46:12.694 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:46:12.694 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:46:12.694 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:46:12.953 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:46:12.953 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:46:12.953 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:12.953 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:12.953 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:12.954 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:46:12.954 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:12.954 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:12.954 [2024-12-09 05:36:59.722303] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:46:12.954 [2024-12-09 05:36:59.722400] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:46:12.954 [2024-12-09 05:36:59.722443] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:46:12.954 [2024-12-09 05:36:59.722470] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:46:12.954 [2024-12-09 05:36:59.723203] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:46:12.954 [2024-12-09 05:36:59.723384] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:46:12.954 [2024-12-09 05:36:59.723551] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:46:12.954 [2024-12-09 05:36:59.723574] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:46:12.954 [2024-12-09 05:36:59.723590] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:46:12.954 [2024-12-09 05:36:59.723606] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:46:12.954 BaseBdev1 00:46:12.954 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:12.954 05:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:46:13.890 05:37:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:46:13.890 05:37:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:46:13.890 05:37:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:46:13.890 05:37:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:13.890 05:37:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:13.890 05:37:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:46:13.890 05:37:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:13.890 05:37:00 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:13.890 05:37:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:46:13.890 05:37:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:13.890 05:37:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:13.890 05:37:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:13.890 05:37:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:13.890 05:37:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:13.890 05:37:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:13.890 05:37:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:13.891 "name": "raid_bdev1", 00:46:13.891 "uuid": "f09f0a7a-4455-491a-81c2-bd5602132675", 00:46:13.891 "strip_size_kb": 0, 00:46:13.891 "state": "online", 00:46:13.891 "raid_level": "raid1", 00:46:13.891 "superblock": true, 00:46:13.891 "num_base_bdevs": 2, 00:46:13.891 "num_base_bdevs_discovered": 1, 00:46:13.891 "num_base_bdevs_operational": 1, 00:46:13.891 "base_bdevs_list": [ 00:46:13.891 { 00:46:13.891 "name": null, 00:46:13.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:13.891 "is_configured": false, 00:46:13.891 "data_offset": 0, 00:46:13.891 "data_size": 7936 00:46:13.891 }, 00:46:13.891 { 00:46:13.891 "name": "BaseBdev2", 00:46:13.891 "uuid": "3498949d-524c-5617-b484-12c36d3b2f6d", 00:46:13.891 "is_configured": true, 00:46:13.891 "data_offset": 256, 00:46:13.891 "data_size": 7936 00:46:13.891 } 00:46:13.891 ] 00:46:13.891 }' 00:46:13.891 05:37:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:13.891 05:37:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:46:14.458 05:37:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:46:14.458 05:37:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:46:14.458 05:37:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:46:14.458 05:37:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:46:14.458 05:37:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:46:14.458 05:37:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:14.458 05:37:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:14.458 05:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:14.458 05:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:14.458 05:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:14.458 05:37:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:46:14.458 "name": "raid_bdev1", 00:46:14.458 "uuid": "f09f0a7a-4455-491a-81c2-bd5602132675", 00:46:14.458 "strip_size_kb": 0, 00:46:14.458 "state": "online", 00:46:14.458 "raid_level": "raid1", 00:46:14.458 "superblock": true, 00:46:14.458 "num_base_bdevs": 2, 00:46:14.458 "num_base_bdevs_discovered": 1, 00:46:14.458 "num_base_bdevs_operational": 1, 00:46:14.458 "base_bdevs_list": [ 00:46:14.458 { 00:46:14.458 "name": null, 00:46:14.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:14.458 "is_configured": false, 00:46:14.458 "data_offset": 0, 00:46:14.458 "data_size": 7936 00:46:14.458 }, 00:46:14.458 { 00:46:14.458 "name": "BaseBdev2", 00:46:14.458 "uuid": "3498949d-524c-5617-b484-12c36d3b2f6d", 00:46:14.458 "is_configured": true, 
00:46:14.458 "data_offset": 256, 00:46:14.458 "data_size": 7936 00:46:14.458 } 00:46:14.458 ] 00:46:14.458 }' 00:46:14.458 05:37:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:46:14.459 05:37:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:46:14.459 05:37:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:46:14.717 05:37:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:46:14.717 05:37:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:46:14.717 05:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:46:14.717 05:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:46:14.717 05:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:46:14.717 05:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:14.717 05:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:46:14.717 05:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:14.717 05:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:46:14.718 05:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:14.718 05:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:14.718 [2024-12-09 05:37:01.442910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:46:14.718 [2024-12-09 05:37:01.443334] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:46:14.718 [2024-12-09 05:37:01.443379] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:46:14.718 request: 00:46:14.718 { 00:46:14.718 "base_bdev": "BaseBdev1", 00:46:14.718 "raid_bdev": "raid_bdev1", 00:46:14.718 "method": "bdev_raid_add_base_bdev", 00:46:14.718 "req_id": 1 00:46:14.718 } 00:46:14.718 Got JSON-RPC error response 00:46:14.718 response: 00:46:14.718 { 00:46:14.718 "code": -22, 00:46:14.718 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:46:14.718 } 00:46:14.718 05:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:46:14.718 05:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:46:14.718 05:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:14.718 05:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:14.718 05:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:14.718 05:37:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:46:15.654 05:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:46:15.654 05:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:46:15.654 05:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:46:15.654 05:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:15.654 05:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:15.654 05:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:46:15.654 05:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:15.654 05:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:15.654 05:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:46:15.654 05:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:15.654 05:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:15.654 05:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:15.654 05:37:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:15.654 05:37:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:15.654 05:37:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:15.654 05:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:15.654 "name": "raid_bdev1", 00:46:15.654 "uuid": "f09f0a7a-4455-491a-81c2-bd5602132675", 00:46:15.654 "strip_size_kb": 0, 00:46:15.654 "state": "online", 00:46:15.654 "raid_level": "raid1", 00:46:15.654 "superblock": true, 00:46:15.654 "num_base_bdevs": 2, 00:46:15.654 "num_base_bdevs_discovered": 1, 00:46:15.654 "num_base_bdevs_operational": 1, 00:46:15.654 "base_bdevs_list": [ 00:46:15.654 { 00:46:15.654 "name": null, 00:46:15.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:15.654 "is_configured": false, 00:46:15.654 "data_offset": 0, 00:46:15.654 "data_size": 7936 00:46:15.654 }, 00:46:15.654 { 00:46:15.654 "name": "BaseBdev2", 00:46:15.654 "uuid": "3498949d-524c-5617-b484-12c36d3b2f6d", 00:46:15.654 "is_configured": true, 00:46:15.654 "data_offset": 256, 00:46:15.654 "data_size": 7936 00:46:15.654 } 00:46:15.654 ] 00:46:15.654 }' 
00:46:15.654 05:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:15.654 05:37:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:16.221 05:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:46:16.221 05:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:46:16.221 05:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:46:16.221 05:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:46:16.221 05:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:46:16.221 05:37:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:16.221 05:37:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:16.221 05:37:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:16.221 05:37:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:16.221 05:37:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:16.221 05:37:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:46:16.221 "name": "raid_bdev1", 00:46:16.221 "uuid": "f09f0a7a-4455-491a-81c2-bd5602132675", 00:46:16.221 "strip_size_kb": 0, 00:46:16.221 "state": "online", 00:46:16.222 "raid_level": "raid1", 00:46:16.222 "superblock": true, 00:46:16.222 "num_base_bdevs": 2, 00:46:16.222 "num_base_bdevs_discovered": 1, 00:46:16.222 "num_base_bdevs_operational": 1, 00:46:16.222 "base_bdevs_list": [ 00:46:16.222 { 00:46:16.222 "name": null, 00:46:16.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:16.222 "is_configured": false, 00:46:16.222 "data_offset": 0, 
00:46:16.222 "data_size": 7936 00:46:16.222 }, 00:46:16.222 { 00:46:16.222 "name": "BaseBdev2", 00:46:16.222 "uuid": "3498949d-524c-5617-b484-12c36d3b2f6d", 00:46:16.222 "is_configured": true, 00:46:16.222 "data_offset": 256, 00:46:16.222 "data_size": 7936 00:46:16.222 } 00:46:16.222 ] 00:46:16.222 }' 00:46:16.222 05:37:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:46:16.222 05:37:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:46:16.222 05:37:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:46:16.222 05:37:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:46:16.222 05:37:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 87054 00:46:16.222 05:37:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 87054 ']' 00:46:16.222 05:37:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 87054 00:46:16.222 05:37:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:46:16.222 05:37:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:16.222 05:37:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87054 00:46:16.481 05:37:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:16.481 05:37:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:16.481 killing process with pid 87054 00:46:16.481 05:37:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87054' 00:46:16.481 05:37:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 87054 00:46:16.481 Received shutdown signal, test time was about 
60.000000 seconds 00:46:16.481 00:46:16.481 Latency(us) 00:46:16.481 [2024-12-09T05:37:03.453Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:16.481 [2024-12-09T05:37:03.453Z] =================================================================================================================== 00:46:16.481 [2024-12-09T05:37:03.453Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:46:16.481 [2024-12-09 05:37:03.205465] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:46:16.481 05:37:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 87054 00:46:16.481 [2024-12-09 05:37:03.205640] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:46:16.481 [2024-12-09 05:37:03.205734] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:46:16.481 [2024-12-09 05:37:03.205756] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:46:16.740 [2024-12-09 05:37:03.470123] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:46:17.686 05:37:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:46:17.686 00:46:17.686 real 0m22.060s 00:46:17.686 user 0m29.960s 00:46:17.686 sys 0m2.660s 00:46:17.686 ************************************ 00:46:17.686 END TEST raid_rebuild_test_sb_4k 00:46:17.686 ************************************ 00:46:17.686 05:37:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:17.686 05:37:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:46:17.687 05:37:04 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:46:17.687 05:37:04 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:46:17.687 05:37:04 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:46:17.687 05:37:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:17.687 05:37:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:46:17.687 ************************************ 00:46:17.687 START TEST raid_state_function_test_sb_md_separate 00:46:17.687 ************************************ 00:46:17.687 05:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:46:17.687 05:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:46:17.687 05:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:46:17.687 05:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:46:17.687 05:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:46:17.687 05:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:46:17.687 05:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:46:17.687 05:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:46:17.687 05:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:46:17.687 05:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:46:17.687 05:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:46:17.687 05:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:46:17.687 05:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:46:17.687 05:37:04 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:46:17.687 05:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:46:17.687 05:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:46:17.687 05:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:46:17.687 05:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:46:17.687 05:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:46:17.687 05:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:46:17.687 05:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:46:17.687 05:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:46:17.687 05:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:46:17.687 05:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87763 00:46:17.687 05:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87763' 00:46:17.687 Process raid pid: 87763 00:46:17.687 05:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:46:17.687 05:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87763 00:46:17.687 05:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87763 ']' 00:46:17.687 05:37:04 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:17.687 05:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:17.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:17.687 05:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:17.687 05:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:17.687 05:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:17.946 [2024-12-09 05:37:04.714708] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:46:17.946 [2024-12-09 05:37:04.714962] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:17.946 [2024-12-09 05:37:04.897331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:18.204 [2024-12-09 05:37:05.028081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:18.462 [2024-12-09 05:37:05.240605] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:46:18.462 [2024-12-09 05:37:05.240654] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:46:19.028 05:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:19.028 05:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:46:19.028 05:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:46:19.028 05:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:19.028 05:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:19.028 [2024-12-09 05:37:05.752947] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:46:19.028 [2024-12-09 05:37:05.753034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:46:19.028 [2024-12-09 05:37:05.753053] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:46:19.028 [2024-12-09 05:37:05.753070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:46:19.028 05:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:19.028 05:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:46:19.028 05:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:46:19.028 05:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:46:19.028 05:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:19.028 05:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:19.028 05:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:46:19.028 05:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:19.028 05:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:19.028 05:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:46:19.028 05:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:19.028 05:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:46:19.028 05:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:19.028 05:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:19.028 05:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:19.028 05:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:19.028 05:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:19.028 "name": "Existed_Raid", 00:46:19.028 "uuid": "040ef783-b43c-43f9-93ee-9b7048eeb49f", 00:46:19.028 "strip_size_kb": 0, 00:46:19.028 "state": "configuring", 00:46:19.028 "raid_level": "raid1", 00:46:19.028 "superblock": true, 00:46:19.028 "num_base_bdevs": 2, 00:46:19.028 "num_base_bdevs_discovered": 0, 00:46:19.028 "num_base_bdevs_operational": 2, 00:46:19.028 "base_bdevs_list": [ 00:46:19.028 { 00:46:19.028 "name": "BaseBdev1", 00:46:19.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:19.028 "is_configured": false, 00:46:19.028 "data_offset": 0, 00:46:19.028 "data_size": 0 00:46:19.028 }, 00:46:19.028 { 00:46:19.028 "name": "BaseBdev2", 00:46:19.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:19.028 "is_configured": false, 00:46:19.028 "data_offset": 0, 00:46:19.028 "data_size": 0 00:46:19.028 } 00:46:19.028 ] 00:46:19.028 }' 00:46:19.028 05:37:05 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:19.028 05:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:19.285 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:46:19.285 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:19.285 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:19.543 [2024-12-09 05:37:06.261060] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:46:19.543 [2024-12-09 05:37:06.261109] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:46:19.543 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:19.543 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:46:19.543 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:19.543 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:19.543 [2024-12-09 05:37:06.273035] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:46:19.543 [2024-12-09 05:37:06.273092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:46:19.543 [2024-12-09 05:37:06.273108] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:46:19.543 [2024-12-09 05:37:06.273127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:46:19.543 05:37:06 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:19.543 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:46:19.543 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:19.543 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:19.543 [2024-12-09 05:37:06.320129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:46:19.543 BaseBdev1 00:46:19.543 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:19.543 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:46:19.543 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:46:19.543 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:46:19.543 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:46:19.543 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:46:19.543 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:46:19.543 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:46:19.543 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:19.543 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:19.543 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:19.543 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:46:19.543 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:19.543 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:19.543 [ 00:46:19.543 { 00:46:19.543 "name": "BaseBdev1", 00:46:19.543 "aliases": [ 00:46:19.543 "c2558972-0706-43f0-a721-bffac7b201a5" 00:46:19.543 ], 00:46:19.543 "product_name": "Malloc disk", 00:46:19.543 "block_size": 4096, 00:46:19.543 "num_blocks": 8192, 00:46:19.543 "uuid": "c2558972-0706-43f0-a721-bffac7b201a5", 00:46:19.543 "md_size": 32, 00:46:19.543 "md_interleave": false, 00:46:19.543 "dif_type": 0, 00:46:19.543 "assigned_rate_limits": { 00:46:19.543 "rw_ios_per_sec": 0, 00:46:19.543 "rw_mbytes_per_sec": 0, 00:46:19.543 "r_mbytes_per_sec": 0, 00:46:19.543 "w_mbytes_per_sec": 0 00:46:19.543 }, 00:46:19.543 "claimed": true, 00:46:19.543 "claim_type": "exclusive_write", 00:46:19.543 "zoned": false, 00:46:19.543 "supported_io_types": { 00:46:19.543 "read": true, 00:46:19.543 "write": true, 00:46:19.543 "unmap": true, 00:46:19.543 "flush": true, 00:46:19.543 "reset": true, 00:46:19.543 "nvme_admin": false, 00:46:19.543 "nvme_io": false, 00:46:19.543 "nvme_io_md": false, 00:46:19.543 "write_zeroes": true, 00:46:19.543 "zcopy": true, 00:46:19.543 "get_zone_info": false, 00:46:19.543 "zone_management": false, 00:46:19.543 "zone_append": false, 00:46:19.543 "compare": false, 00:46:19.543 "compare_and_write": false, 00:46:19.543 "abort": true, 00:46:19.543 "seek_hole": false, 00:46:19.543 "seek_data": false, 00:46:19.543 "copy": true, 00:46:19.543 "nvme_iov_md": false 00:46:19.543 }, 00:46:19.543 "memory_domains": [ 00:46:19.543 { 00:46:19.543 "dma_device_id": "system", 00:46:19.543 "dma_device_type": 1 00:46:19.543 }, 
00:46:19.543 { 00:46:19.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:46:19.543 "dma_device_type": 2 00:46:19.543 } 00:46:19.544 ], 00:46:19.544 "driver_specific": {} 00:46:19.544 } 00:46:19.544 ] 00:46:19.544 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:19.544 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:46:19.544 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:46:19.544 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:46:19.544 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:46:19.544 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:19.544 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:19.544 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:46:19.544 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:19.544 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:19.544 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:46:19.544 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:19.544 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:19.544 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:46:19.544 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:46:19.544 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:19.544 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:19.544 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:19.544 "name": "Existed_Raid", 00:46:19.544 "uuid": "8e9f539f-14b1-4627-99d9-53069c26dd39", 00:46:19.544 "strip_size_kb": 0, 00:46:19.544 "state": "configuring", 00:46:19.544 "raid_level": "raid1", 00:46:19.544 "superblock": true, 00:46:19.544 "num_base_bdevs": 2, 00:46:19.544 "num_base_bdevs_discovered": 1, 00:46:19.544 "num_base_bdevs_operational": 2, 00:46:19.544 "base_bdevs_list": [ 00:46:19.544 { 00:46:19.544 "name": "BaseBdev1", 00:46:19.544 "uuid": "c2558972-0706-43f0-a721-bffac7b201a5", 00:46:19.544 "is_configured": true, 00:46:19.544 "data_offset": 256, 00:46:19.544 "data_size": 7936 00:46:19.544 }, 00:46:19.544 { 00:46:19.544 "name": "BaseBdev2", 00:46:19.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:19.544 "is_configured": false, 00:46:19.544 "data_offset": 0, 00:46:19.544 "data_size": 0 00:46:19.544 } 00:46:19.544 ] 00:46:19.544 }' 00:46:19.544 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:19.544 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:20.110 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:46:20.110 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:20.110 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 
-- # set +x 00:46:20.110 [2024-12-09 05:37:06.868404] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:46:20.110 [2024-12-09 05:37:06.868487] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:46:20.110 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:20.110 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:46:20.110 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:20.110 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:20.110 [2024-12-09 05:37:06.880418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:46:20.110 [2024-12-09 05:37:06.883098] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:46:20.110 [2024-12-09 05:37:06.883380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:46:20.110 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:20.110 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:46:20.110 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:46:20.110 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:46:20.110 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:46:20.110 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:46:20.110 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:20.110 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:20.110 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:46:20.110 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:20.110 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:20.110 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:46:20.110 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:20.110 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:46:20.110 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:20.110 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:20.110 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:20.111 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:20.111 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:20.111 "name": "Existed_Raid", 00:46:20.111 "uuid": "a8c4f066-0fe1-433b-9629-9baacd80608d", 00:46:20.111 "strip_size_kb": 0, 00:46:20.111 "state": "configuring", 00:46:20.111 "raid_level": "raid1", 00:46:20.111 "superblock": true, 00:46:20.111 "num_base_bdevs": 2, 00:46:20.111 "num_base_bdevs_discovered": 1, 00:46:20.111 
"num_base_bdevs_operational": 2, 00:46:20.111 "base_bdevs_list": [ 00:46:20.111 { 00:46:20.111 "name": "BaseBdev1", 00:46:20.111 "uuid": "c2558972-0706-43f0-a721-bffac7b201a5", 00:46:20.111 "is_configured": true, 00:46:20.111 "data_offset": 256, 00:46:20.111 "data_size": 7936 00:46:20.111 }, 00:46:20.111 { 00:46:20.111 "name": "BaseBdev2", 00:46:20.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:20.111 "is_configured": false, 00:46:20.111 "data_offset": 0, 00:46:20.111 "data_size": 0 00:46:20.111 } 00:46:20.111 ] 00:46:20.111 }' 00:46:20.111 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:20.111 05:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:20.677 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:46:20.677 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:20.677 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:20.677 [2024-12-09 05:37:07.470320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:46:20.677 [2024-12-09 05:37:07.470686] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:46:20.677 [2024-12-09 05:37:07.470711] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:46:20.677 [2024-12-09 05:37:07.470926] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:46:20.677 BaseBdev2 00:46:20.677 [2024-12-09 05:37:07.471180] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:46:20.677 [2024-12-09 05:37:07.471204] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:46:20.677 [2024-12-09 05:37:07.471329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:46:20.677 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:20.677 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:46:20.677 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:46:20.677 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:46:20.677 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:46:20.677 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:46:20.677 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:46:20.677 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:46:20.677 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:20.677 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:20.677 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:20.677 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:46:20.677 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:20.677 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:20.677 [ 00:46:20.677 { 00:46:20.677 "name": "BaseBdev2", 00:46:20.677 "aliases": [ 00:46:20.677 
"5dbbb6e8-876a-4a14-9e40-8765f255d51c" 00:46:20.677 ], 00:46:20.677 "product_name": "Malloc disk", 00:46:20.677 "block_size": 4096, 00:46:20.677 "num_blocks": 8192, 00:46:20.677 "uuid": "5dbbb6e8-876a-4a14-9e40-8765f255d51c", 00:46:20.677 "md_size": 32, 00:46:20.677 "md_interleave": false, 00:46:20.677 "dif_type": 0, 00:46:20.677 "assigned_rate_limits": { 00:46:20.677 "rw_ios_per_sec": 0, 00:46:20.677 "rw_mbytes_per_sec": 0, 00:46:20.677 "r_mbytes_per_sec": 0, 00:46:20.677 "w_mbytes_per_sec": 0 00:46:20.677 }, 00:46:20.677 "claimed": true, 00:46:20.677 "claim_type": "exclusive_write", 00:46:20.677 "zoned": false, 00:46:20.677 "supported_io_types": { 00:46:20.677 "read": true, 00:46:20.677 "write": true, 00:46:20.677 "unmap": true, 00:46:20.677 "flush": true, 00:46:20.677 "reset": true, 00:46:20.677 "nvme_admin": false, 00:46:20.677 "nvme_io": false, 00:46:20.677 "nvme_io_md": false, 00:46:20.677 "write_zeroes": true, 00:46:20.677 "zcopy": true, 00:46:20.677 "get_zone_info": false, 00:46:20.677 "zone_management": false, 00:46:20.677 "zone_append": false, 00:46:20.677 "compare": false, 00:46:20.677 "compare_and_write": false, 00:46:20.677 "abort": true, 00:46:20.677 "seek_hole": false, 00:46:20.677 "seek_data": false, 00:46:20.677 "copy": true, 00:46:20.677 "nvme_iov_md": false 00:46:20.677 }, 00:46:20.677 "memory_domains": [ 00:46:20.677 { 00:46:20.677 "dma_device_id": "system", 00:46:20.677 "dma_device_type": 1 00:46:20.677 }, 00:46:20.677 { 00:46:20.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:46:20.677 "dma_device_type": 2 00:46:20.677 } 00:46:20.677 ], 00:46:20.677 "driver_specific": {} 00:46:20.677 } 00:46:20.677 ] 00:46:20.677 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:20.677 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:46:20.677 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:46:20.677 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:46:20.677 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:46:20.677 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:46:20.677 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:46:20.677 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:20.677 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:20.677 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:46:20.677 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:20.677 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:20.677 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:46:20.677 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:20.677 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:20.678 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:46:20.678 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:20.678 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:20.678 05:37:07 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:20.678 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:20.678 "name": "Existed_Raid", 00:46:20.678 "uuid": "a8c4f066-0fe1-433b-9629-9baacd80608d", 00:46:20.678 "strip_size_kb": 0, 00:46:20.678 "state": "online", 00:46:20.678 "raid_level": "raid1", 00:46:20.678 "superblock": true, 00:46:20.678 "num_base_bdevs": 2, 00:46:20.678 "num_base_bdevs_discovered": 2, 00:46:20.678 "num_base_bdevs_operational": 2, 00:46:20.678 "base_bdevs_list": [ 00:46:20.678 { 00:46:20.678 "name": "BaseBdev1", 00:46:20.678 "uuid": "c2558972-0706-43f0-a721-bffac7b201a5", 00:46:20.678 "is_configured": true, 00:46:20.678 "data_offset": 256, 00:46:20.678 "data_size": 7936 00:46:20.678 }, 00:46:20.678 { 00:46:20.678 "name": "BaseBdev2", 00:46:20.678 "uuid": "5dbbb6e8-876a-4a14-9e40-8765f255d51c", 00:46:20.678 "is_configured": true, 00:46:20.678 "data_offset": 256, 00:46:20.678 "data_size": 7936 00:46:20.678 } 00:46:20.678 ] 00:46:20.678 }' 00:46:20.678 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:20.678 05:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:21.243 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:46:21.243 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:46:21.243 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:46:21.243 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:46:21.243 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:46:21.243 05:37:08 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:46:21.243 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:46:21.243 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:21.243 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:46:21.243 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:21.244 [2024-12-09 05:37:08.059122] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:46:21.244 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:21.244 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:46:21.244 "name": "Existed_Raid", 00:46:21.244 "aliases": [ 00:46:21.244 "a8c4f066-0fe1-433b-9629-9baacd80608d" 00:46:21.244 ], 00:46:21.244 "product_name": "Raid Volume", 00:46:21.244 "block_size": 4096, 00:46:21.244 "num_blocks": 7936, 00:46:21.244 "uuid": "a8c4f066-0fe1-433b-9629-9baacd80608d", 00:46:21.244 "md_size": 32, 00:46:21.244 "md_interleave": false, 00:46:21.244 "dif_type": 0, 00:46:21.244 "assigned_rate_limits": { 00:46:21.244 "rw_ios_per_sec": 0, 00:46:21.244 "rw_mbytes_per_sec": 0, 00:46:21.244 "r_mbytes_per_sec": 0, 00:46:21.244 "w_mbytes_per_sec": 0 00:46:21.244 }, 00:46:21.244 "claimed": false, 00:46:21.244 "zoned": false, 00:46:21.244 "supported_io_types": { 00:46:21.244 "read": true, 00:46:21.244 "write": true, 00:46:21.244 "unmap": false, 00:46:21.244 "flush": false, 00:46:21.244 "reset": true, 00:46:21.244 "nvme_admin": false, 00:46:21.244 "nvme_io": false, 00:46:21.244 "nvme_io_md": false, 00:46:21.244 "write_zeroes": true, 00:46:21.244 "zcopy": false, 00:46:21.244 "get_zone_info": 
false, 00:46:21.244 "zone_management": false, 00:46:21.244 "zone_append": false, 00:46:21.244 "compare": false, 00:46:21.244 "compare_and_write": false, 00:46:21.244 "abort": false, 00:46:21.244 "seek_hole": false, 00:46:21.244 "seek_data": false, 00:46:21.244 "copy": false, 00:46:21.244 "nvme_iov_md": false 00:46:21.244 }, 00:46:21.244 "memory_domains": [ 00:46:21.244 { 00:46:21.244 "dma_device_id": "system", 00:46:21.244 "dma_device_type": 1 00:46:21.244 }, 00:46:21.244 { 00:46:21.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:46:21.244 "dma_device_type": 2 00:46:21.244 }, 00:46:21.244 { 00:46:21.244 "dma_device_id": "system", 00:46:21.244 "dma_device_type": 1 00:46:21.244 }, 00:46:21.244 { 00:46:21.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:46:21.244 "dma_device_type": 2 00:46:21.244 } 00:46:21.244 ], 00:46:21.244 "driver_specific": { 00:46:21.244 "raid": { 00:46:21.244 "uuid": "a8c4f066-0fe1-433b-9629-9baacd80608d", 00:46:21.244 "strip_size_kb": 0, 00:46:21.244 "state": "online", 00:46:21.244 "raid_level": "raid1", 00:46:21.244 "superblock": true, 00:46:21.244 "num_base_bdevs": 2, 00:46:21.244 "num_base_bdevs_discovered": 2, 00:46:21.244 "num_base_bdevs_operational": 2, 00:46:21.244 "base_bdevs_list": [ 00:46:21.244 { 00:46:21.244 "name": "BaseBdev1", 00:46:21.244 "uuid": "c2558972-0706-43f0-a721-bffac7b201a5", 00:46:21.244 "is_configured": true, 00:46:21.244 "data_offset": 256, 00:46:21.244 "data_size": 7936 00:46:21.244 }, 00:46:21.244 { 00:46:21.244 "name": "BaseBdev2", 00:46:21.244 "uuid": "5dbbb6e8-876a-4a14-9e40-8765f255d51c", 00:46:21.244 "is_configured": true, 00:46:21.244 "data_offset": 256, 00:46:21.244 "data_size": 7936 00:46:21.244 } 00:46:21.244 ] 00:46:21.244 } 00:46:21.244 } 00:46:21.244 }' 00:46:21.244 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:46:21.244 05:37:08 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:46:21.244 BaseBdev2' 00:46:21.244 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:46:21.244 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:46:21.244 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:46:21.244 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:46:21.244 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:21.244 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:21.244 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:21.502 05:37:08 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:21.502 [2024-12-09 05:37:08.326724] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 1 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:21.502 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:21.760 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:21.760 "name": "Existed_Raid", 
00:46:21.760 "uuid": "a8c4f066-0fe1-433b-9629-9baacd80608d", 00:46:21.760 "strip_size_kb": 0, 00:46:21.760 "state": "online", 00:46:21.760 "raid_level": "raid1", 00:46:21.760 "superblock": true, 00:46:21.760 "num_base_bdevs": 2, 00:46:21.760 "num_base_bdevs_discovered": 1, 00:46:21.760 "num_base_bdevs_operational": 1, 00:46:21.760 "base_bdevs_list": [ 00:46:21.760 { 00:46:21.760 "name": null, 00:46:21.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:21.760 "is_configured": false, 00:46:21.760 "data_offset": 0, 00:46:21.760 "data_size": 7936 00:46:21.760 }, 00:46:21.760 { 00:46:21.760 "name": "BaseBdev2", 00:46:21.760 "uuid": "5dbbb6e8-876a-4a14-9e40-8765f255d51c", 00:46:21.760 "is_configured": true, 00:46:21.760 "data_offset": 256, 00:46:21.760 "data_size": 7936 00:46:21.760 } 00:46:21.760 ] 00:46:21.760 }' 00:46:21.760 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:21.760 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:22.018 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:46:22.018 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:46:22.018 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:46:22.018 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:22.018 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:22.018 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:22.018 05:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:22.277 05:37:09 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:46:22.277 05:37:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:46:22.277 05:37:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:46:22.277 05:37:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:22.277 05:37:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:22.277 [2024-12-09 05:37:09.036282] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:46:22.277 [2024-12-09 05:37:09.036467] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:46:22.277 [2024-12-09 05:37:09.137063] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:46:22.277 [2024-12-09 05:37:09.137134] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:46:22.277 [2024-12-09 05:37:09.137162] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:46:22.277 05:37:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:22.277 05:37:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:46:22.277 05:37:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:46:22.277 05:37:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:22.278 05:37:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:22.278 05:37:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 
00:46:22.278 05:37:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:22.278 05:37:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:22.278 05:37:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:46:22.278 05:37:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:46:22.278 05:37:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:46:22.278 05:37:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87763 00:46:22.278 05:37:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87763 ']' 00:46:22.278 05:37:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87763 00:46:22.278 05:37:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:46:22.278 05:37:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:22.278 05:37:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87763 00:46:22.278 killing process with pid 87763 00:46:22.278 05:37:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:22.278 05:37:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:22.278 05:37:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87763' 00:46:22.278 05:37:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87763 00:46:22.278 [2024-12-09 05:37:09.232844] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:46:22.278 05:37:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87763 00:46:22.278 [2024-12-09 05:37:09.248189] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:46:23.671 ************************************ 00:46:23.671 END TEST raid_state_function_test_sb_md_separate 00:46:23.671 ************************************ 00:46:23.671 05:37:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:46:23.671 00:46:23.671 real 0m5.858s 00:46:23.671 user 0m8.704s 00:46:23.671 sys 0m0.860s 00:46:23.671 05:37:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:23.671 05:37:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:23.671 05:37:10 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:46:23.671 05:37:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:46:23.671 05:37:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:23.671 05:37:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:46:23.671 ************************************ 00:46:23.671 START TEST raid_superblock_test_md_separate 00:46:23.671 ************************************ 00:46:23.671 05:37:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:46:23.671 05:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:46:23.671 05:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:46:23.671 05:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:46:23.671 05:37:10 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:46:23.671 05:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:46:23.671 05:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:46:23.671 05:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:46:23.671 05:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:46:23.671 05:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:46:23.671 05:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:46:23.671 05:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:46:23.671 05:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:46:23.671 05:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:46:23.671 05:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:46:23.671 05:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:46:23.671 05:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=88021 00:46:23.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:46:23.671 05:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 88021 00:46:23.671 05:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:46:23.671 05:37:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88021 ']' 00:46:23.671 05:37:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:23.671 05:37:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:23.672 05:37:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:23.672 05:37:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:23.672 05:37:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:23.930 [2024-12-09 05:37:10.656872] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:46:23.930 [2024-12-09 05:37:10.657094] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88021 ] 00:46:23.930 [2024-12-09 05:37:10.845070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:24.189 [2024-12-09 05:37:10.988027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:24.448 [2024-12-09 05:37:11.212611] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:46:24.448 [2024-12-09 05:37:11.213029] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:46:24.706 05:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:24.706 05:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:46:24.706 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:46:24.706 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:46:24.706 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:46:24.706 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:46:24.706 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:46:24.706 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:46:24.706 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:46:24.706 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:46:24.706 05:37:11 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:46:24.706 05:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:24.706 05:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:24.966 malloc1 00:46:24.966 05:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:24.966 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:46:24.966 05:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:24.966 05:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:24.966 [2024-12-09 05:37:11.724458] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:46:24.966 [2024-12-09 05:37:11.724567] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:46:24.966 [2024-12-09 05:37:11.724606] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:46:24.966 [2024-12-09 05:37:11.724623] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:46:24.966 [2024-12-09 05:37:11.727574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:46:24.966 [2024-12-09 05:37:11.727851] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:46:24.966 pt1 00:46:24.966 05:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:24.966 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:46:24.966 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:46:24.966 
05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:46:24.966 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:46:24.966 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:46:24.966 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:46:24.966 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:46:24.966 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:46:24.967 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:46:24.967 05:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:24.967 05:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:24.967 malloc2 00:46:24.967 05:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:24.967 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:46:24.967 05:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:24.967 05:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:24.967 [2024-12-09 05:37:11.782321] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:46:24.967 [2024-12-09 05:37:11.782435] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:46:24.967 [2024-12-09 05:37:11.782475] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007e80 00:46:24.967 [2024-12-09 05:37:11.782490] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:46:24.967 [2024-12-09 05:37:11.785326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:46:24.967 [2024-12-09 05:37:11.785368] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:46:24.967 pt2 00:46:24.967 05:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:24.967 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:46:24.967 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:46:24.967 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:46:24.967 05:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:24.967 05:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:24.967 [2024-12-09 05:37:11.794356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:46:24.967 [2024-12-09 05:37:11.797008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:46:24.967 [2024-12-09 05:37:11.797299] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:46:24.967 [2024-12-09 05:37:11.797321] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:46:24.967 [2024-12-09 05:37:11.797437] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:46:24.967 [2024-12-09 05:37:11.797616] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:46:24.967 [2024-12-09 05:37:11.797635] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:46:24.967 [2024-12-09 05:37:11.797789] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:46:24.967 05:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:24.967 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:46:24.967 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:46:24.967 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:46:24.967 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:24.967 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:24.967 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:46:24.967 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:24.967 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:24.967 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:46:24.967 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:24.967 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:24.967 05:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:24.967 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:24.967 05:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:46:24.967 05:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:24.967 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:24.967 "name": "raid_bdev1", 00:46:24.967 "uuid": "b7d9eb72-3605-4f88-bce4-5a6b54a1b9fa", 00:46:24.967 "strip_size_kb": 0, 00:46:24.967 "state": "online", 00:46:24.967 "raid_level": "raid1", 00:46:24.967 "superblock": true, 00:46:24.967 "num_base_bdevs": 2, 00:46:24.967 "num_base_bdevs_discovered": 2, 00:46:24.967 "num_base_bdevs_operational": 2, 00:46:24.967 "base_bdevs_list": [ 00:46:24.967 { 00:46:24.967 "name": "pt1", 00:46:24.967 "uuid": "00000000-0000-0000-0000-000000000001", 00:46:24.967 "is_configured": true, 00:46:24.967 "data_offset": 256, 00:46:24.967 "data_size": 7936 00:46:24.967 }, 00:46:24.967 { 00:46:24.967 "name": "pt2", 00:46:24.967 "uuid": "00000000-0000-0000-0000-000000000002", 00:46:24.967 "is_configured": true, 00:46:24.967 "data_offset": 256, 00:46:24.967 "data_size": 7936 00:46:24.967 } 00:46:24.967 ] 00:46:24.967 }' 00:46:24.967 05:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:24.967 05:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:25.546 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:46:25.546 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:46:25.546 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:46:25.546 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:46:25.546 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:46:25.547 05:37:12 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:46:25.547 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:46:25.547 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:46:25.547 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:25.547 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:25.547 [2024-12-09 05:37:12.326962] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:46:25.547 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:25.547 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:46:25.547 "name": "raid_bdev1", 00:46:25.547 "aliases": [ 00:46:25.547 "b7d9eb72-3605-4f88-bce4-5a6b54a1b9fa" 00:46:25.547 ], 00:46:25.547 "product_name": "Raid Volume", 00:46:25.547 "block_size": 4096, 00:46:25.547 "num_blocks": 7936, 00:46:25.547 "uuid": "b7d9eb72-3605-4f88-bce4-5a6b54a1b9fa", 00:46:25.547 "md_size": 32, 00:46:25.547 "md_interleave": false, 00:46:25.547 "dif_type": 0, 00:46:25.547 "assigned_rate_limits": { 00:46:25.547 "rw_ios_per_sec": 0, 00:46:25.547 "rw_mbytes_per_sec": 0, 00:46:25.547 "r_mbytes_per_sec": 0, 00:46:25.547 "w_mbytes_per_sec": 0 00:46:25.547 }, 00:46:25.547 "claimed": false, 00:46:25.547 "zoned": false, 00:46:25.547 "supported_io_types": { 00:46:25.547 "read": true, 00:46:25.547 "write": true, 00:46:25.547 "unmap": false, 00:46:25.547 "flush": false, 00:46:25.547 "reset": true, 00:46:25.547 "nvme_admin": false, 00:46:25.547 "nvme_io": false, 00:46:25.547 "nvme_io_md": false, 00:46:25.547 "write_zeroes": true, 00:46:25.547 "zcopy": false, 00:46:25.547 "get_zone_info": false, 00:46:25.547 "zone_management": false, 00:46:25.547 "zone_append": false, 00:46:25.547 "compare": 
false, 00:46:25.547 "compare_and_write": false, 00:46:25.547 "abort": false, 00:46:25.547 "seek_hole": false, 00:46:25.547 "seek_data": false, 00:46:25.547 "copy": false, 00:46:25.547 "nvme_iov_md": false 00:46:25.547 }, 00:46:25.547 "memory_domains": [ 00:46:25.547 { 00:46:25.547 "dma_device_id": "system", 00:46:25.547 "dma_device_type": 1 00:46:25.547 }, 00:46:25.547 { 00:46:25.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:46:25.547 "dma_device_type": 2 00:46:25.547 }, 00:46:25.547 { 00:46:25.547 "dma_device_id": "system", 00:46:25.547 "dma_device_type": 1 00:46:25.547 }, 00:46:25.547 { 00:46:25.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:46:25.547 "dma_device_type": 2 00:46:25.547 } 00:46:25.547 ], 00:46:25.547 "driver_specific": { 00:46:25.547 "raid": { 00:46:25.547 "uuid": "b7d9eb72-3605-4f88-bce4-5a6b54a1b9fa", 00:46:25.547 "strip_size_kb": 0, 00:46:25.547 "state": "online", 00:46:25.547 "raid_level": "raid1", 00:46:25.547 "superblock": true, 00:46:25.547 "num_base_bdevs": 2, 00:46:25.547 "num_base_bdevs_discovered": 2, 00:46:25.547 "num_base_bdevs_operational": 2, 00:46:25.547 "base_bdevs_list": [ 00:46:25.547 { 00:46:25.547 "name": "pt1", 00:46:25.547 "uuid": "00000000-0000-0000-0000-000000000001", 00:46:25.547 "is_configured": true, 00:46:25.547 "data_offset": 256, 00:46:25.547 "data_size": 7936 00:46:25.547 }, 00:46:25.547 { 00:46:25.547 "name": "pt2", 00:46:25.547 "uuid": "00000000-0000-0000-0000-000000000002", 00:46:25.547 "is_configured": true, 00:46:25.547 "data_offset": 256, 00:46:25.547 "data_size": 7936 00:46:25.547 } 00:46:25.547 ] 00:46:25.547 } 00:46:25.547 } 00:46:25.547 }' 00:46:25.547 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:46:25.547 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:46:25.547 pt2' 00:46:25.547 05:37:12 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:46:25.547 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:46:25.547 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:46:25.547 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:46:25.547 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:46:25.547 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:25.547 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:25.547 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:25.806 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:46:25.806 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:46:25.806 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:46:25.806 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:46:25.806 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:25.807 05:37:12 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:25.807 [2024-12-09 05:37:12.582933] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b7d9eb72-3605-4f88-bce4-5a6b54a1b9fa 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z b7d9eb72-3605-4f88-bce4-5a6b54a1b9fa ']' 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:25.807 [2024-12-09 05:37:12.630580] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:46:25.807 [2024-12-09 05:37:12.630610] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:46:25.807 
[2024-12-09 05:37:12.630722] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:46:25.807 [2024-12-09 05:37:12.630823] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:46:25.807 [2024-12-09 05:37:12.630846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:25.807 [2024-12-09 05:37:12.766661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:46:25.807 [2024-12-09 05:37:12.769296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:46:25.807 [2024-12-09 05:37:12.769417] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:46:25.807 [2024-12-09 05:37:12.769514] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:46:25.807 [2024-12-09 05:37:12.769542] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:46:25.807 [2024-12-09 05:37:12.769558] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:46:25.807 request: 00:46:25.807 { 00:46:25.807 "name": "raid_bdev1", 00:46:25.807 "raid_level": "raid1", 00:46:25.807 "base_bdevs": [ 00:46:25.807 "malloc1", 00:46:25.807 "malloc2" 00:46:25.807 ], 00:46:25.807 "superblock": false, 00:46:25.807 "method": "bdev_raid_create", 00:46:25.807 "req_id": 1 00:46:25.807 } 00:46:25.807 Got JSON-RPC error response 00:46:25.807 response: 00:46:25.807 { 00:46:25.807 "code": -17, 00:46:25.807 "message": "Failed to create RAID bdev raid_bdev1: 
File exists" 00:46:25.807 } 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:25.807 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:26.067 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:26.067 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:26.067 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:26.067 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:46:26.067 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:26.067 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:46:26.067 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:46:26.067 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:46:26.067 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:26.067 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:26.067 [2024-12-09 05:37:12.834672] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:46:26.067 [2024-12-09 05:37:12.834904] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:46:26.067 [2024-12-09 05:37:12.835042] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:46:26.067 [2024-12-09 05:37:12.835183] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:46:26.067 [2024-12-09 05:37:12.838074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:46:26.067 [2024-12-09 05:37:12.838244] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:46:26.067 [2024-12-09 05:37:12.838417] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:46:26.067 [2024-12-09 05:37:12.838612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:46:26.067 pt1 00:46:26.067 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:26.067 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:46:26.067 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:46:26.067 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:46:26.067 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:26.067 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:26.067 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:46:26.067 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:26.067 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:26.067 05:37:12 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:46:26.067 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:26.067 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:26.067 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:26.067 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:26.067 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:26.067 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:26.067 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:26.067 "name": "raid_bdev1", 00:46:26.067 "uuid": "b7d9eb72-3605-4f88-bce4-5a6b54a1b9fa", 00:46:26.067 "strip_size_kb": 0, 00:46:26.067 "state": "configuring", 00:46:26.067 "raid_level": "raid1", 00:46:26.067 "superblock": true, 00:46:26.067 "num_base_bdevs": 2, 00:46:26.067 "num_base_bdevs_discovered": 1, 00:46:26.067 "num_base_bdevs_operational": 2, 00:46:26.067 "base_bdevs_list": [ 00:46:26.067 { 00:46:26.067 "name": "pt1", 00:46:26.067 "uuid": "00000000-0000-0000-0000-000000000001", 00:46:26.067 "is_configured": true, 00:46:26.067 "data_offset": 256, 00:46:26.067 "data_size": 7936 00:46:26.067 }, 00:46:26.067 { 00:46:26.067 "name": null, 00:46:26.067 "uuid": "00000000-0000-0000-0000-000000000002", 00:46:26.067 "is_configured": false, 00:46:26.067 "data_offset": 256, 00:46:26.067 "data_size": 7936 00:46:26.067 } 00:46:26.067 ] 00:46:26.067 }' 00:46:26.067 05:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:26.067 05:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:26.636 05:37:13 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:46:26.636 05:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:46:26.636 05:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:46:26.636 05:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:46:26.636 05:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:26.636 05:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:26.636 [2024-12-09 05:37:13.323110] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:46:26.636 [2024-12-09 05:37:13.323364] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:46:26.636 [2024-12-09 05:37:13.323408] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:46:26.636 [2024-12-09 05:37:13.323429] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:46:26.636 [2024-12-09 05:37:13.323745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:46:26.636 [2024-12-09 05:37:13.323801] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:46:26.636 [2024-12-09 05:37:13.323881] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:46:26.636 [2024-12-09 05:37:13.323919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:46:26.636 [2024-12-09 05:37:13.324068] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:46:26.636 [2024-12-09 05:37:13.324096] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:46:26.636 [2024-12-09 05:37:13.324197] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:46:26.636 [2024-12-09 05:37:13.324357] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:46:26.636 [2024-12-09 05:37:13.324373] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:46:26.636 [2024-12-09 05:37:13.324502] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:46:26.636 pt2 00:46:26.636 05:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:26.636 05:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:46:26.636 05:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:46:26.636 05:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:46:26.636 05:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:46:26.636 05:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:46:26.636 05:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:26.636 05:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:26.636 05:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:46:26.636 05:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:26.636 05:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:26.636 05:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:46:26.636 05:37:13 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:26.636 05:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:26.636 05:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:26.636 05:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:26.636 05:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:26.636 05:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:26.636 05:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:26.636 "name": "raid_bdev1", 00:46:26.636 "uuid": "b7d9eb72-3605-4f88-bce4-5a6b54a1b9fa", 00:46:26.636 "strip_size_kb": 0, 00:46:26.636 "state": "online", 00:46:26.636 "raid_level": "raid1", 00:46:26.636 "superblock": true, 00:46:26.636 "num_base_bdevs": 2, 00:46:26.636 "num_base_bdevs_discovered": 2, 00:46:26.636 "num_base_bdevs_operational": 2, 00:46:26.636 "base_bdevs_list": [ 00:46:26.636 { 00:46:26.636 "name": "pt1", 00:46:26.636 "uuid": "00000000-0000-0000-0000-000000000001", 00:46:26.636 "is_configured": true, 00:46:26.636 "data_offset": 256, 00:46:26.636 "data_size": 7936 00:46:26.636 }, 00:46:26.636 { 00:46:26.636 "name": "pt2", 00:46:26.636 "uuid": "00000000-0000-0000-0000-000000000002", 00:46:26.636 "is_configured": true, 00:46:26.636 "data_offset": 256, 00:46:26.636 "data_size": 7936 00:46:26.636 } 00:46:26.636 ] 00:46:26.636 }' 00:46:26.636 05:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:26.636 05:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:26.895 05:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # 
verify_raid_bdev_properties raid_bdev1 00:46:26.895 05:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:46:26.895 05:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:46:26.895 05:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:46:26.895 05:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:46:26.895 05:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:46:26.895 05:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:46:26.895 05:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:46:26.895 05:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:26.895 05:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:26.895 [2024-12-09 05:37:13.852038] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:46:27.154 05:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:27.155 05:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:46:27.155 "name": "raid_bdev1", 00:46:27.155 "aliases": [ 00:46:27.155 "b7d9eb72-3605-4f88-bce4-5a6b54a1b9fa" 00:46:27.155 ], 00:46:27.155 "product_name": "Raid Volume", 00:46:27.155 "block_size": 4096, 00:46:27.155 "num_blocks": 7936, 00:46:27.155 "uuid": "b7d9eb72-3605-4f88-bce4-5a6b54a1b9fa", 00:46:27.155 "md_size": 32, 00:46:27.155 "md_interleave": false, 00:46:27.155 "dif_type": 0, 00:46:27.155 "assigned_rate_limits": { 00:46:27.155 "rw_ios_per_sec": 0, 00:46:27.155 "rw_mbytes_per_sec": 0, 00:46:27.155 "r_mbytes_per_sec": 0, 00:46:27.155 
"w_mbytes_per_sec": 0 00:46:27.155 }, 00:46:27.155 "claimed": false, 00:46:27.155 "zoned": false, 00:46:27.155 "supported_io_types": { 00:46:27.155 "read": true, 00:46:27.155 "write": true, 00:46:27.155 "unmap": false, 00:46:27.155 "flush": false, 00:46:27.155 "reset": true, 00:46:27.155 "nvme_admin": false, 00:46:27.155 "nvme_io": false, 00:46:27.155 "nvme_io_md": false, 00:46:27.155 "write_zeroes": true, 00:46:27.155 "zcopy": false, 00:46:27.155 "get_zone_info": false, 00:46:27.155 "zone_management": false, 00:46:27.155 "zone_append": false, 00:46:27.155 "compare": false, 00:46:27.155 "compare_and_write": false, 00:46:27.155 "abort": false, 00:46:27.155 "seek_hole": false, 00:46:27.155 "seek_data": false, 00:46:27.155 "copy": false, 00:46:27.155 "nvme_iov_md": false 00:46:27.155 }, 00:46:27.155 "memory_domains": [ 00:46:27.155 { 00:46:27.155 "dma_device_id": "system", 00:46:27.155 "dma_device_type": 1 00:46:27.155 }, 00:46:27.155 { 00:46:27.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:46:27.155 "dma_device_type": 2 00:46:27.155 }, 00:46:27.155 { 00:46:27.155 "dma_device_id": "system", 00:46:27.155 "dma_device_type": 1 00:46:27.155 }, 00:46:27.155 { 00:46:27.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:46:27.155 "dma_device_type": 2 00:46:27.155 } 00:46:27.155 ], 00:46:27.155 "driver_specific": { 00:46:27.155 "raid": { 00:46:27.155 "uuid": "b7d9eb72-3605-4f88-bce4-5a6b54a1b9fa", 00:46:27.155 "strip_size_kb": 0, 00:46:27.155 "state": "online", 00:46:27.155 "raid_level": "raid1", 00:46:27.155 "superblock": true, 00:46:27.155 "num_base_bdevs": 2, 00:46:27.155 "num_base_bdevs_discovered": 2, 00:46:27.155 "num_base_bdevs_operational": 2, 00:46:27.155 "base_bdevs_list": [ 00:46:27.155 { 00:46:27.155 "name": "pt1", 00:46:27.155 "uuid": "00000000-0000-0000-0000-000000000001", 00:46:27.155 "is_configured": true, 00:46:27.155 "data_offset": 256, 00:46:27.155 "data_size": 7936 00:46:27.155 }, 00:46:27.155 { 00:46:27.155 "name": "pt2", 00:46:27.155 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:46:27.155 "is_configured": true, 00:46:27.155 "data_offset": 256, 00:46:27.155 "data_size": 7936 00:46:27.155 } 00:46:27.155 ] 00:46:27.155 } 00:46:27.155 } 00:46:27.155 }' 00:46:27.155 05:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:46:27.155 05:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:46:27.155 pt2' 00:46:27.155 05:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:46:27.155 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:46:27.155 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:46:27.155 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:46:27.155 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:46:27.155 05:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:27.155 05:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:27.155 05:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:27.155 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:46:27.155 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:46:27.155 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:46:27.155 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:46:27.155 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:46:27.155 05:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:27.155 05:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:27.155 05:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:27.155 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:46:27.155 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:46:27.155 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:46:27.155 05:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:27.155 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:46:27.155 05:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:27.155 [2024-12-09 05:37:14.120145] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:46:27.414 05:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:27.414 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' b7d9eb72-3605-4f88-bce4-5a6b54a1b9fa '!=' b7d9eb72-3605-4f88-bce4-5a6b54a1b9fa ']' 00:46:27.414 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:46:27.414 05:37:14 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:46:27.414 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:46:27.414 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:46:27.414 05:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:27.414 05:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:27.414 [2024-12-09 05:37:14.171844] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:46:27.415 05:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:27.415 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:46:27.415 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:46:27.415 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:46:27.415 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:27.415 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:27.415 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:46:27.415 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:27.415 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:27.415 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:46:27.415 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:27.415 05:37:14 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:27.415 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:27.415 05:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:27.415 05:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:27.415 05:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:27.415 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:27.415 "name": "raid_bdev1", 00:46:27.415 "uuid": "b7d9eb72-3605-4f88-bce4-5a6b54a1b9fa", 00:46:27.415 "strip_size_kb": 0, 00:46:27.415 "state": "online", 00:46:27.415 "raid_level": "raid1", 00:46:27.415 "superblock": true, 00:46:27.415 "num_base_bdevs": 2, 00:46:27.415 "num_base_bdevs_discovered": 1, 00:46:27.415 "num_base_bdevs_operational": 1, 00:46:27.415 "base_bdevs_list": [ 00:46:27.415 { 00:46:27.415 "name": null, 00:46:27.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:27.415 "is_configured": false, 00:46:27.415 "data_offset": 0, 00:46:27.415 "data_size": 7936 00:46:27.415 }, 00:46:27.415 { 00:46:27.415 "name": "pt2", 00:46:27.415 "uuid": "00000000-0000-0000-0000-000000000002", 00:46:27.415 "is_configured": true, 00:46:27.415 "data_offset": 256, 00:46:27.415 "data_size": 7936 00:46:27.415 } 00:46:27.415 ] 00:46:27.415 }' 00:46:27.415 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:27.415 05:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:27.983 [2024-12-09 05:37:14.707962] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:46:27.983 [2024-12-09 05:37:14.708000] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:46:27.983 [2024-12-09 05:37:14.708122] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:46:27.983 [2024-12-09 05:37:14.708254] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:46:27.983 [2024-12-09 05:37:14.708274] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:46:27.983 05:37:14 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:27.983 [2024-12-09 05:37:14.783986] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:46:27.983 [2024-12-09 05:37:14.784103] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:46:27.983 [2024-12-09 05:37:14.784148] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:46:27.983 [2024-12-09 05:37:14.784197] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:46:27.983 [2024-12-09 05:37:14.787199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:46:27.983 [2024-12-09 05:37:14.787441] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:46:27.983 [2024-12-09 05:37:14.787537] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:46:27.983 [2024-12-09 05:37:14.787609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:46:27.983 [2024-12-09 05:37:14.787755] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:46:27.983 [2024-12-09 05:37:14.787817] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:46:27.983 [2024-12-09 05:37:14.787935] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:46:27.983 [2024-12-09 05:37:14.788105] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:46:27.983 [2024-12-09 05:37:14.788137] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:46:27.983 [2024-12-09 05:37:14.788345] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:46:27.983 pt2 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:27.983 "name": "raid_bdev1", 00:46:27.983 "uuid": "b7d9eb72-3605-4f88-bce4-5a6b54a1b9fa", 00:46:27.983 "strip_size_kb": 0, 00:46:27.983 "state": "online", 00:46:27.983 "raid_level": "raid1", 00:46:27.983 "superblock": true, 00:46:27.983 "num_base_bdevs": 2, 00:46:27.983 "num_base_bdevs_discovered": 1, 00:46:27.983 "num_base_bdevs_operational": 1, 00:46:27.983 "base_bdevs_list": [ 00:46:27.983 { 00:46:27.983 "name": null, 00:46:27.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:27.983 "is_configured": false, 00:46:27.983 "data_offset": 256, 00:46:27.983 "data_size": 7936 00:46:27.983 }, 00:46:27.983 { 00:46:27.983 "name": "pt2", 00:46:27.983 "uuid": "00000000-0000-0000-0000-000000000002", 00:46:27.983 "is_configured": true, 
00:46:27.983 "data_offset": 256, 00:46:27.983 "data_size": 7936 00:46:27.983 } 00:46:27.983 ] 00:46:27.983 }' 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:27.983 05:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:28.551 05:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:46:28.551 05:37:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:28.551 05:37:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:28.551 [2024-12-09 05:37:15.336506] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:46:28.551 [2024-12-09 05:37:15.336547] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:46:28.551 [2024-12-09 05:37:15.336656] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:46:28.551 [2024-12-09 05:37:15.336740] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:46:28.551 [2024-12-09 05:37:15.336756] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:46:28.551 05:37:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:28.551 05:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:28.551 05:37:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:28.551 05:37:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:28.551 05:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:46:28.551 05:37:15 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:28.551 05:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:46:28.551 05:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:46:28.551 05:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:46:28.551 05:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:46:28.551 05:37:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:28.551 05:37:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:28.551 [2024-12-09 05:37:15.400497] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:46:28.551 [2024-12-09 05:37:15.400579] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:46:28.551 [2024-12-09 05:37:15.400611] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:46:28.551 [2024-12-09 05:37:15.400626] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:46:28.551 [2024-12-09 05:37:15.403729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:46:28.551 [2024-12-09 05:37:15.403834] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:46:28.551 [2024-12-09 05:37:15.403919] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:46:28.551 [2024-12-09 05:37:15.403981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:46:28.551 [2024-12-09 05:37:15.404219] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:46:28.551 
[2024-12-09 05:37:15.404238] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:46:28.551 [2024-12-09 05:37:15.404263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:46:28.551 [2024-12-09 05:37:15.404344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:46:28.551 [2024-12-09 05:37:15.404487] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:46:28.551 [2024-12-09 05:37:15.404504] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:46:28.551 [2024-12-09 05:37:15.404587] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:46:28.551 [2024-12-09 05:37:15.404747] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:46:28.551 [2024-12-09 05:37:15.404767] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:46:28.552 [2024-12-09 05:37:15.405015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:46:28.552 pt1 00:46:28.552 05:37:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:28.552 05:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:46:28.552 05:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:46:28.552 05:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:46:28.552 05:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:46:28.552 05:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:28.552 05:37:15 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:28.552 05:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:46:28.552 05:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:28.552 05:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:28.552 05:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:46:28.552 05:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:28.552 05:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:28.552 05:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:28.552 05:37:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:28.552 05:37:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:28.552 05:37:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:28.552 05:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:28.552 "name": "raid_bdev1", 00:46:28.552 "uuid": "b7d9eb72-3605-4f88-bce4-5a6b54a1b9fa", 00:46:28.552 "strip_size_kb": 0, 00:46:28.552 "state": "online", 00:46:28.552 "raid_level": "raid1", 00:46:28.552 "superblock": true, 00:46:28.552 "num_base_bdevs": 2, 00:46:28.552 "num_base_bdevs_discovered": 1, 00:46:28.552 "num_base_bdevs_operational": 1, 00:46:28.552 "base_bdevs_list": [ 00:46:28.552 { 00:46:28.552 "name": null, 00:46:28.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:28.552 "is_configured": false, 00:46:28.552 "data_offset": 256, 00:46:28.552 "data_size": 7936 00:46:28.552 }, 00:46:28.552 { 00:46:28.552 
"name": "pt2", 00:46:28.552 "uuid": "00000000-0000-0000-0000-000000000002", 00:46:28.552 "is_configured": true, 00:46:28.552 "data_offset": 256, 00:46:28.552 "data_size": 7936 00:46:28.552 } 00:46:28.552 ] 00:46:28.552 }' 00:46:28.552 05:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:28.552 05:37:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:29.118 05:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:46:29.118 05:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:46:29.118 05:37:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:29.118 05:37:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:29.118 05:37:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:29.118 05:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:46:29.118 05:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:46:29.118 05:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:46:29.118 05:37:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:29.118 05:37:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:29.118 [2024-12-09 05:37:15.981090] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:46:29.118 05:37:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:29.118 05:37:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 
b7d9eb72-3605-4f88-bce4-5a6b54a1b9fa '!=' b7d9eb72-3605-4f88-bce4-5a6b54a1b9fa ']' 00:46:29.118 05:37:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 88021 00:46:29.118 05:37:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88021 ']' 00:46:29.118 05:37:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 88021 00:46:29.118 05:37:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:46:29.118 05:37:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:29.118 05:37:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88021 00:46:29.118 killing process with pid 88021 00:46:29.118 05:37:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:29.118 05:37:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:29.118 05:37:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88021' 00:46:29.118 05:37:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 88021 00:46:29.118 [2024-12-09 05:37:16.051981] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:46:29.118 05:37:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 88021 00:46:29.118 [2024-12-09 05:37:16.052092] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:46:29.118 [2024-12-09 05:37:16.052162] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:46:29.118 [2024-12-09 05:37:16.052188] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 
00:46:29.375 [2024-12-09 05:37:16.253589] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:46:30.750 05:37:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:46:30.750 00:46:30.750 real 0m6.894s 00:46:30.750 user 0m10.785s 00:46:30.750 sys 0m1.021s 00:46:30.750 05:37:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:30.750 ************************************ 00:46:30.750 END TEST raid_superblock_test_md_separate 00:46:30.750 ************************************ 00:46:30.750 05:37:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:30.750 05:37:17 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:46:30.750 05:37:17 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:46:30.750 05:37:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:46:30.750 05:37:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:30.750 05:37:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:46:30.750 ************************************ 00:46:30.750 START TEST raid_rebuild_test_sb_md_separate 00:46:30.750 ************************************ 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:46:30.750 05:37:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@589 -- # strip_size=0 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88346 00:46:30.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88346 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88346 ']' 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:30.750 05:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:30.750 I/O size of 3145728 is greater than zero copy threshold (65536). 00:46:30.750 Zero copy mechanism will not be used. 00:46:30.750 [2024-12-09 05:37:17.620794] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:46:30.750 [2024-12-09 05:37:17.620994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88346 ] 00:46:31.009 [2024-12-09 05:37:17.812826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:31.009 [2024-12-09 05:37:17.948864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:31.267 [2024-12-09 05:37:18.149046] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:46:31.267 [2024-12-09 05:37:18.149478] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:46:31.833 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:31.833 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:46:31.833 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:46:31.833 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:46:31.833 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:31.833 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:31.833 BaseBdev1_malloc 00:46:31.833 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:31.833 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:46:31.833 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:31.833 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:46:31.833 [2024-12-09 05:37:18.644649] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:46:31.833 [2024-12-09 05:37:18.644742] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:46:31.833 [2024-12-09 05:37:18.644829] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:46:31.833 [2024-12-09 05:37:18.644869] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:46:31.833 [2024-12-09 05:37:18.647624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:46:31.833 [2024-12-09 05:37:18.647672] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:46:31.833 BaseBdev1 00:46:31.833 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:31.833 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:46:31.833 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:46:31.833 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:31.833 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:31.833 BaseBdev2_malloc 00:46:31.833 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:31.833 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:46:31.833 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:31.833 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:31.833 [2024-12-09 05:37:18.696261] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:46:31.833 [2024-12-09 05:37:18.696572] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:46:31.833 [2024-12-09 05:37:18.696645] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:46:31.833 [2024-12-09 05:37:18.696896] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:46:31.833 [2024-12-09 05:37:18.699718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:46:31.833 [2024-12-09 05:37:18.699964] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:46:31.833 BaseBdev2 00:46:31.833 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:31.833 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:46:31.833 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:31.833 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:31.833 spare_malloc 00:46:31.833 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:31.833 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:46:31.834 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:31.834 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:31.834 spare_delay 00:46:31.834 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:31.834 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- 
# rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:46:31.834 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:31.834 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:31.834 [2024-12-09 05:37:18.774568] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:46:31.834 [2024-12-09 05:37:18.774667] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:46:31.834 [2024-12-09 05:37:18.774700] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:46:31.834 [2024-12-09 05:37:18.774720] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:46:31.834 [2024-12-09 05:37:18.777505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:46:31.834 [2024-12-09 05:37:18.777569] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:46:31.834 spare 00:46:31.834 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:31.834 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:46:31.834 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:31.834 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:31.834 [2024-12-09 05:37:18.782628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:46:31.834 [2024-12-09 05:37:18.785248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:46:31.834 [2024-12-09 05:37:18.785479] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:46:31.834 [2024-12-09 05:37:18.785501] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:46:31.834 [2024-12-09 05:37:18.785591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:46:31.834 [2024-12-09 05:37:18.785747] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:46:31.834 [2024-12-09 05:37:18.785809] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:46:31.834 [2024-12-09 05:37:18.785932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:46:31.834 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:31.834 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:46:31.834 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:46:31.834 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:46:31.834 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:31.834 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:31.834 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:46:31.834 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:31.834 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:31.834 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:46:31.834 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:31.834 05:37:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:31.834 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:31.834 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:31.834 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:32.092 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:32.092 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:32.092 "name": "raid_bdev1", 00:46:32.092 "uuid": "4960cf30-6e18-40b5-9b6a-a7f85ccda04e", 00:46:32.092 "strip_size_kb": 0, 00:46:32.092 "state": "online", 00:46:32.092 "raid_level": "raid1", 00:46:32.092 "superblock": true, 00:46:32.092 "num_base_bdevs": 2, 00:46:32.092 "num_base_bdevs_discovered": 2, 00:46:32.092 "num_base_bdevs_operational": 2, 00:46:32.092 "base_bdevs_list": [ 00:46:32.092 { 00:46:32.092 "name": "BaseBdev1", 00:46:32.092 "uuid": "9b6d6a65-11c3-56fa-bd10-f036cd6f70d4", 00:46:32.092 "is_configured": true, 00:46:32.092 "data_offset": 256, 00:46:32.092 "data_size": 7936 00:46:32.092 }, 00:46:32.092 { 00:46:32.092 "name": "BaseBdev2", 00:46:32.092 "uuid": "9707b0db-71f1-51e3-9d02-6a864ac36a93", 00:46:32.092 "is_configured": true, 00:46:32.092 "data_offset": 256, 00:46:32.092 "data_size": 7936 00:46:32.092 } 00:46:32.092 ] 00:46:32.092 }' 00:46:32.092 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:32.092 05:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:32.351 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:46:32.351 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:46:32.351 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:32.351 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:46:32.351 [2024-12-09 05:37:19.311241] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:46:32.609 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:32.609 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:46:32.609 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:32.609 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:46:32.609 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:32.609 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:32.609 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:32.609 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:46:32.609 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:46:32.609 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:46:32.609 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:46:32.610 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:46:32.610 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:46:32.610 05:37:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:46:32.610 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:46:32.610 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:46:32.610 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:46:32.610 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:46:32.610 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:46:32.610 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:46:32.610 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:46:32.869 [2024-12-09 05:37:19.623014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:46:32.869 /dev/nbd0 00:46:32.869 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:46:32.869 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:46:32.869 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:46:32.869 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:46:32.869 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:46:32.869 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:46:32.869 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:46:32.869 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@877 -- # break 00:46:32.869 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:46:32.869 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:46:32.869 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:46:32.869 1+0 records in 00:46:32.869 1+0 records out 00:46:32.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348795 s, 11.7 MB/s 00:46:32.869 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:46:32.869 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:46:32.869 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:46:32.869 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:46:32.869 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:46:32.869 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:46:32.869 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:46:32.869 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:46:32.869 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:46:32.869 05:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:46:33.838 7936+0 records in 00:46:33.838 7936+0 records out 00:46:33.838 32505856 bytes (33 MB, 31 MiB) copied, 1.01162 s, 
32.1 MB/s 00:46:33.838 05:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:46:33.838 05:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:46:33.838 05:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:46:33.838 05:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:46:33.838 05:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:46:33.838 05:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:46:33.838 05:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:46:34.098 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:46:34.098 [2024-12-09 05:37:21.009642] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:46:34.098 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:46:34.098 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:46:34.098 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:46:34.098 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:46:34.098 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:46:34.098 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:46:34.098 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:46:34.098 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:46:34.098 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:34.098 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:34.098 [2024-12-09 05:37:21.025765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:46:34.098 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:34.098 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:46:34.098 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:46:34.099 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:46:34.099 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:34.099 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:34.099 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:46:34.099 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:34.099 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:34.099 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:46:34.099 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:34.099 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:34.099 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:46:34.099 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:34.099 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:34.099 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:34.358 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:34.358 "name": "raid_bdev1", 00:46:34.358 "uuid": "4960cf30-6e18-40b5-9b6a-a7f85ccda04e", 00:46:34.358 "strip_size_kb": 0, 00:46:34.358 "state": "online", 00:46:34.358 "raid_level": "raid1", 00:46:34.358 "superblock": true, 00:46:34.358 "num_base_bdevs": 2, 00:46:34.358 "num_base_bdevs_discovered": 1, 00:46:34.358 "num_base_bdevs_operational": 1, 00:46:34.358 "base_bdevs_list": [ 00:46:34.358 { 00:46:34.358 "name": null, 00:46:34.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:34.358 "is_configured": false, 00:46:34.358 "data_offset": 0, 00:46:34.358 "data_size": 7936 00:46:34.358 }, 00:46:34.358 { 00:46:34.358 "name": "BaseBdev2", 00:46:34.358 "uuid": "9707b0db-71f1-51e3-9d02-6a864ac36a93", 00:46:34.358 "is_configured": true, 00:46:34.358 "data_offset": 256, 00:46:34.358 "data_size": 7936 00:46:34.358 } 00:46:34.358 ] 00:46:34.358 }' 00:46:34.358 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:34.358 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:34.617 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:46:34.617 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:34.617 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:34.617 [2024-12-09 05:37:21.562098] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:46:34.617 [2024-12-09 05:37:21.578649] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:46:34.617 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:34.617 05:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:46:34.617 [2024-12-09 05:37:21.582835] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:46:35.994 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:46:35.994 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:46:35.995 "name": "raid_bdev1", 00:46:35.995 "uuid": 
"4960cf30-6e18-40b5-9b6a-a7f85ccda04e", 00:46:35.995 "strip_size_kb": 0, 00:46:35.995 "state": "online", 00:46:35.995 "raid_level": "raid1", 00:46:35.995 "superblock": true, 00:46:35.995 "num_base_bdevs": 2, 00:46:35.995 "num_base_bdevs_discovered": 2, 00:46:35.995 "num_base_bdevs_operational": 2, 00:46:35.995 "process": { 00:46:35.995 "type": "rebuild", 00:46:35.995 "target": "spare", 00:46:35.995 "progress": { 00:46:35.995 "blocks": 2560, 00:46:35.995 "percent": 32 00:46:35.995 } 00:46:35.995 }, 00:46:35.995 "base_bdevs_list": [ 00:46:35.995 { 00:46:35.995 "name": "spare", 00:46:35.995 "uuid": "79edacc4-fd23-541d-871c-d4541f2b050a", 00:46:35.995 "is_configured": true, 00:46:35.995 "data_offset": 256, 00:46:35.995 "data_size": 7936 00:46:35.995 }, 00:46:35.995 { 00:46:35.995 "name": "BaseBdev2", 00:46:35.995 "uuid": "9707b0db-71f1-51e3-9d02-6a864ac36a93", 00:46:35.995 "is_configured": true, 00:46:35.995 "data_offset": 256, 00:46:35.995 "data_size": 7936 00:46:35.995 } 00:46:35.995 ] 00:46:35.995 }' 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:35.995 [2024-12-09 05:37:22.762238] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:46:35.995 
[2024-12-09 05:37:22.793160] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:46:35.995 [2024-12-09 05:37:22.793447] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:46:35.995 [2024-12-09 05:37:22.793629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:46:35.995 [2024-12-09 05:37:22.793787] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:35.995 05:37:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:35.995 "name": "raid_bdev1", 00:46:35.995 "uuid": "4960cf30-6e18-40b5-9b6a-a7f85ccda04e", 00:46:35.995 "strip_size_kb": 0, 00:46:35.995 "state": "online", 00:46:35.995 "raid_level": "raid1", 00:46:35.995 "superblock": true, 00:46:35.995 "num_base_bdevs": 2, 00:46:35.995 "num_base_bdevs_discovered": 1, 00:46:35.995 "num_base_bdevs_operational": 1, 00:46:35.995 "base_bdevs_list": [ 00:46:35.995 { 00:46:35.995 "name": null, 00:46:35.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:35.995 "is_configured": false, 00:46:35.995 "data_offset": 0, 00:46:35.995 "data_size": 7936 00:46:35.995 }, 00:46:35.995 { 00:46:35.995 "name": "BaseBdev2", 00:46:35.995 "uuid": "9707b0db-71f1-51e3-9d02-6a864ac36a93", 00:46:35.995 "is_configured": true, 00:46:35.995 "data_offset": 256, 00:46:35.995 "data_size": 7936 00:46:35.995 } 00:46:35.995 ] 00:46:35.995 }' 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:35.995 05:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:36.563 05:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:46:36.563 05:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:46:36.563 05:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:46:36.563 05:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:46:36.563 05:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:46:36.563 05:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:36.563 05:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:36.563 05:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:36.563 05:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:36.563 05:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:36.563 05:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:46:36.563 "name": "raid_bdev1", 00:46:36.563 "uuid": "4960cf30-6e18-40b5-9b6a-a7f85ccda04e", 00:46:36.563 "strip_size_kb": 0, 00:46:36.563 "state": "online", 00:46:36.563 "raid_level": "raid1", 00:46:36.563 "superblock": true, 00:46:36.563 "num_base_bdevs": 2, 00:46:36.563 "num_base_bdevs_discovered": 1, 00:46:36.563 "num_base_bdevs_operational": 1, 00:46:36.563 "base_bdevs_list": [ 00:46:36.563 { 00:46:36.563 "name": null, 00:46:36.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:36.563 "is_configured": false, 00:46:36.563 "data_offset": 0, 00:46:36.563 "data_size": 7936 00:46:36.563 }, 00:46:36.563 { 00:46:36.563 "name": "BaseBdev2", 00:46:36.563 "uuid": "9707b0db-71f1-51e3-9d02-6a864ac36a93", 00:46:36.563 "is_configured": true, 00:46:36.563 "data_offset": 256, 00:46:36.563 "data_size": 7936 00:46:36.563 } 00:46:36.563 ] 00:46:36.563 }' 00:46:36.563 05:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:46:36.563 05:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:46:36.563 05:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:46:36.821 05:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:46:36.821 05:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:46:36.821 05:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:36.821 05:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:36.821 [2024-12-09 05:37:23.549806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:46:36.822 [2024-12-09 05:37:23.565430] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:46:36.822 05:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:36.822 05:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:46:36.822 [2024-12-09 05:37:23.568467] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:46:37.756 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:46:37.756 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:46:37.756 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:46:37.756 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:46:37.756 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:46:37.756 05:37:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:37.756 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:37.756 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:37.756 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:37.756 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:37.756 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:46:37.756 "name": "raid_bdev1", 00:46:37.756 "uuid": "4960cf30-6e18-40b5-9b6a-a7f85ccda04e", 00:46:37.756 "strip_size_kb": 0, 00:46:37.756 "state": "online", 00:46:37.756 "raid_level": "raid1", 00:46:37.756 "superblock": true, 00:46:37.756 "num_base_bdevs": 2, 00:46:37.756 "num_base_bdevs_discovered": 2, 00:46:37.756 "num_base_bdevs_operational": 2, 00:46:37.756 "process": { 00:46:37.756 "type": "rebuild", 00:46:37.756 "target": "spare", 00:46:37.756 "progress": { 00:46:37.756 "blocks": 2560, 00:46:37.756 "percent": 32 00:46:37.756 } 00:46:37.756 }, 00:46:37.756 "base_bdevs_list": [ 00:46:37.756 { 00:46:37.756 "name": "spare", 00:46:37.756 "uuid": "79edacc4-fd23-541d-871c-d4541f2b050a", 00:46:37.756 "is_configured": true, 00:46:37.756 "data_offset": 256, 00:46:37.756 "data_size": 7936 00:46:37.756 }, 00:46:37.756 { 00:46:37.756 "name": "BaseBdev2", 00:46:37.756 "uuid": "9707b0db-71f1-51e3-9d02-6a864ac36a93", 00:46:37.756 "is_configured": true, 00:46:37.756 "data_offset": 256, 00:46:37.756 "data_size": 7936 00:46:37.756 } 00:46:37.756 ] 00:46:37.756 }' 00:46:37.756 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:46:37.756 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:46:37.756 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:46:38.014 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:46:38.014 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:46:38.014 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:46:38.014 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:46:38.014 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:46:38.014 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:46:38.014 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:46:38.014 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=780 00:46:38.014 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:46:38.014 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:46:38.014 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:46:38.014 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:46:38.014 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:46:38.014 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:46:38.014 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:38.014 05:37:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:38.014 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:38.014 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:38.014 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:38.014 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:46:38.014 "name": "raid_bdev1", 00:46:38.014 "uuid": "4960cf30-6e18-40b5-9b6a-a7f85ccda04e", 00:46:38.014 "strip_size_kb": 0, 00:46:38.014 "state": "online", 00:46:38.014 "raid_level": "raid1", 00:46:38.014 "superblock": true, 00:46:38.014 "num_base_bdevs": 2, 00:46:38.014 "num_base_bdevs_discovered": 2, 00:46:38.014 "num_base_bdevs_operational": 2, 00:46:38.014 "process": { 00:46:38.014 "type": "rebuild", 00:46:38.014 "target": "spare", 00:46:38.014 "progress": { 00:46:38.014 "blocks": 2816, 00:46:38.014 "percent": 35 00:46:38.014 } 00:46:38.014 }, 00:46:38.014 "base_bdevs_list": [ 00:46:38.014 { 00:46:38.014 "name": "spare", 00:46:38.014 "uuid": "79edacc4-fd23-541d-871c-d4541f2b050a", 00:46:38.014 "is_configured": true, 00:46:38.014 "data_offset": 256, 00:46:38.014 "data_size": 7936 00:46:38.014 }, 00:46:38.014 { 00:46:38.014 "name": "BaseBdev2", 00:46:38.014 "uuid": "9707b0db-71f1-51e3-9d02-6a864ac36a93", 00:46:38.014 "is_configured": true, 00:46:38.014 "data_offset": 256, 00:46:38.014 "data_size": 7936 00:46:38.014 } 00:46:38.014 ] 00:46:38.014 }' 00:46:38.014 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:46:38.014 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:46:38.014 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:46:38.014 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:46:38.014 05:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:46:38.958 05:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:46:38.958 05:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:46:38.958 05:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:46:38.958 05:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:46:38.958 05:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:46:38.958 05:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:46:39.216 05:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:39.216 05:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:39.216 05:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:39.217 05:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:39.217 05:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:39.217 05:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:46:39.217 "name": "raid_bdev1", 00:46:39.217 "uuid": "4960cf30-6e18-40b5-9b6a-a7f85ccda04e", 00:46:39.217 "strip_size_kb": 0, 00:46:39.217 "state": "online", 00:46:39.217 "raid_level": "raid1", 00:46:39.217 "superblock": true, 00:46:39.217 "num_base_bdevs": 2, 00:46:39.217 
"num_base_bdevs_discovered": 2, 00:46:39.217 "num_base_bdevs_operational": 2, 00:46:39.217 "process": { 00:46:39.217 "type": "rebuild", 00:46:39.217 "target": "spare", 00:46:39.217 "progress": { 00:46:39.217 "blocks": 5888, 00:46:39.217 "percent": 74 00:46:39.217 } 00:46:39.217 }, 00:46:39.217 "base_bdevs_list": [ 00:46:39.217 { 00:46:39.217 "name": "spare", 00:46:39.217 "uuid": "79edacc4-fd23-541d-871c-d4541f2b050a", 00:46:39.217 "is_configured": true, 00:46:39.217 "data_offset": 256, 00:46:39.217 "data_size": 7936 00:46:39.217 }, 00:46:39.217 { 00:46:39.217 "name": "BaseBdev2", 00:46:39.217 "uuid": "9707b0db-71f1-51e3-9d02-6a864ac36a93", 00:46:39.217 "is_configured": true, 00:46:39.217 "data_offset": 256, 00:46:39.217 "data_size": 7936 00:46:39.217 } 00:46:39.217 ] 00:46:39.217 }' 00:46:39.217 05:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:46:39.217 05:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:46:39.217 05:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:46:39.217 05:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:46:39.217 05:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:46:39.784 [2024-12-09 05:37:26.694926] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:46:39.784 [2024-12-09 05:37:26.695327] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:46:39.784 [2024-12-09 05:37:26.695520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:46:40.351 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:46:40.352 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:46:40.352 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:46:40.352 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:46:40.352 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:46:40.352 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:46:40.352 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:40.352 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:40.352 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:40.352 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:40.352 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:40.352 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:46:40.352 "name": "raid_bdev1", 00:46:40.352 "uuid": "4960cf30-6e18-40b5-9b6a-a7f85ccda04e", 00:46:40.352 "strip_size_kb": 0, 00:46:40.352 "state": "online", 00:46:40.352 "raid_level": "raid1", 00:46:40.352 "superblock": true, 00:46:40.352 "num_base_bdevs": 2, 00:46:40.352 "num_base_bdevs_discovered": 2, 00:46:40.352 "num_base_bdevs_operational": 2, 00:46:40.352 "base_bdevs_list": [ 00:46:40.352 { 00:46:40.352 "name": "spare", 00:46:40.352 "uuid": "79edacc4-fd23-541d-871c-d4541f2b050a", 00:46:40.352 "is_configured": true, 00:46:40.352 "data_offset": 256, 00:46:40.352 "data_size": 7936 00:46:40.352 }, 00:46:40.352 { 00:46:40.352 "name": "BaseBdev2", 00:46:40.352 "uuid": "9707b0db-71f1-51e3-9d02-6a864ac36a93", 00:46:40.352 
"is_configured": true, 00:46:40.352 "data_offset": 256, 00:46:40.352 "data_size": 7936 00:46:40.352 } 00:46:40.352 ] 00:46:40.352 }' 00:46:40.352 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:46:40.352 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:46:40.352 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:46:40.352 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:46:40.352 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:46:40.352 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:46:40.352 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:46:40.352 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:46:40.352 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:46:40.352 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:46:40.352 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:40.352 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:40.352 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:40.352 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:40.352 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:40.352 05:37:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:46:40.352 "name": "raid_bdev1", 00:46:40.352 "uuid": "4960cf30-6e18-40b5-9b6a-a7f85ccda04e", 00:46:40.352 "strip_size_kb": 0, 00:46:40.352 "state": "online", 00:46:40.352 "raid_level": "raid1", 00:46:40.352 "superblock": true, 00:46:40.352 "num_base_bdevs": 2, 00:46:40.352 "num_base_bdevs_discovered": 2, 00:46:40.352 "num_base_bdevs_operational": 2, 00:46:40.352 "base_bdevs_list": [ 00:46:40.352 { 00:46:40.352 "name": "spare", 00:46:40.352 "uuid": "79edacc4-fd23-541d-871c-d4541f2b050a", 00:46:40.352 "is_configured": true, 00:46:40.352 "data_offset": 256, 00:46:40.352 "data_size": 7936 00:46:40.352 }, 00:46:40.352 { 00:46:40.352 "name": "BaseBdev2", 00:46:40.352 "uuid": "9707b0db-71f1-51e3-9d02-6a864ac36a93", 00:46:40.352 "is_configured": true, 00:46:40.352 "data_offset": 256, 00:46:40.352 "data_size": 7936 00:46:40.352 } 00:46:40.352 ] 00:46:40.352 }' 00:46:40.352 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:46:40.610 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:46:40.610 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:46:40.610 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:46:40.610 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:46:40.610 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:46:40.610 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:46:40.610 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:40.610 05:37:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:40.610 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:46:40.610 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:40.610 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:40.610 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:46:40.610 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:40.610 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:40.610 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:40.610 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:40.610 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:40.610 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:40.610 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:40.610 "name": "raid_bdev1", 00:46:40.610 "uuid": "4960cf30-6e18-40b5-9b6a-a7f85ccda04e", 00:46:40.610 "strip_size_kb": 0, 00:46:40.610 "state": "online", 00:46:40.610 "raid_level": "raid1", 00:46:40.610 "superblock": true, 00:46:40.610 "num_base_bdevs": 2, 00:46:40.610 "num_base_bdevs_discovered": 2, 00:46:40.610 "num_base_bdevs_operational": 2, 00:46:40.610 "base_bdevs_list": [ 00:46:40.610 { 00:46:40.610 "name": "spare", 00:46:40.610 "uuid": "79edacc4-fd23-541d-871c-d4541f2b050a", 00:46:40.610 "is_configured": true, 00:46:40.610 "data_offset": 256, 00:46:40.610 "data_size": 
7936 00:46:40.610 }, 00:46:40.610 { 00:46:40.610 "name": "BaseBdev2", 00:46:40.610 "uuid": "9707b0db-71f1-51e3-9d02-6a864ac36a93", 00:46:40.610 "is_configured": true, 00:46:40.610 "data_offset": 256, 00:46:40.610 "data_size": 7936 00:46:40.610 } 00:46:40.610 ] 00:46:40.610 }' 00:46:40.610 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:40.611 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:41.240 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:46:41.240 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:41.240 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:41.240 [2024-12-09 05:37:27.948652] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:46:41.240 [2024-12-09 05:37:27.948696] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:46:41.240 [2024-12-09 05:37:27.948834] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:46:41.240 [2024-12-09 05:37:27.948974] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:46:41.240 [2024-12-09 05:37:27.948994] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:46:41.240 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:41.240 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:41.240 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:46:41.240 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:46:41.240 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:41.240 05:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:41.240 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:46:41.240 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:46:41.240 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:46:41.240 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:46:41.240 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:46:41.240 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:46:41.240 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:46:41.240 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:46:41.240 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:46:41.240 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:46:41.240 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:46:41.240 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:46:41.240 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:46:41.497 /dev/nbd0 00:46:41.498 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 
00:46:41.498 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:46:41.498 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:46:41.498 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:46:41.498 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:46:41.498 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:46:41.498 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:46:41.498 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:46:41.498 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:46:41.498 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:46:41.498 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:46:41.498 1+0 records in 00:46:41.498 1+0 records out 00:46:41.498 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282382 s, 14.5 MB/s 00:46:41.498 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:46:41.498 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:46:41.498 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:46:41.498 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:46:41.498 05:37:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:46:41.498 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:46:41.498 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:46:41.498 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:46:41.756 /dev/nbd1 00:46:42.013 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:46:42.013 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:46:42.013 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:46:42.013 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:46:42.013 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:46:42.013 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:46:42.013 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:46:42.013 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:46:42.013 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:46:42.014 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:46:42.014 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:46:42.014 1+0 records in 00:46:42.014 1+0 records out 00:46:42.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000434317 s, 9.4 MB/s 00:46:42.014 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:46:42.014 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:46:42.014 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:46:42.014 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:46:42.014 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:46:42.014 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:46:42.014 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:46:42.014 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:46:42.014 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:46:42.014 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:46:42.014 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:46:42.014 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:46:42.014 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:46:42.014 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:46:42.014 05:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:46:42.578 05:37:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:46:42.578 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:46:42.578 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:46:42.578 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:46:42.578 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:46:42.578 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:46:42.578 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:46:42.578 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:46:42.578 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:46:42.578 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:46:42.835 05:37:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:42.835 [2024-12-09 05:37:29.670696] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:46:42.835 [2024-12-09 05:37:29.670794] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:46:42.835 [2024-12-09 05:37:29.670844] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:46:42.835 [2024-12-09 05:37:29.670860] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:46:42.835 [2024-12-09 05:37:29.673895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:46:42.835 [2024-12-09 05:37:29.673956] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:46:42.835 [2024-12-09 05:37:29.674063] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:46:42.835 [2024-12-09 05:37:29.674181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:46:42.835 [2024-12-09 05:37:29.674423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:46:42.835 spare 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:42.835 [2024-12-09 05:37:29.774560] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:46:42.835 [2024-12-09 05:37:29.774605] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:46:42.835 [2024-12-09 05:37:29.774740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:46:42.835 [2024-12-09 05:37:29.774961] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:46:42.835 [2024-12-09 05:37:29.774981] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:46:42.835 [2024-12-09 05:37:29.775175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:42.835 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:43.100 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:43.100 "name": "raid_bdev1", 00:46:43.100 "uuid": "4960cf30-6e18-40b5-9b6a-a7f85ccda04e", 00:46:43.100 "strip_size_kb": 0, 00:46:43.100 "state": "online", 00:46:43.100 "raid_level": "raid1", 00:46:43.100 "superblock": true, 00:46:43.100 "num_base_bdevs": 2, 00:46:43.100 "num_base_bdevs_discovered": 2, 00:46:43.100 "num_base_bdevs_operational": 2, 00:46:43.100 "base_bdevs_list": [ 00:46:43.100 { 00:46:43.100 "name": "spare", 00:46:43.100 "uuid": "79edacc4-fd23-541d-871c-d4541f2b050a", 00:46:43.100 
"is_configured": true, 00:46:43.100 "data_offset": 256, 00:46:43.100 "data_size": 7936 00:46:43.100 }, 00:46:43.100 { 00:46:43.100 "name": "BaseBdev2", 00:46:43.100 "uuid": "9707b0db-71f1-51e3-9d02-6a864ac36a93", 00:46:43.100 "is_configured": true, 00:46:43.100 "data_offset": 256, 00:46:43.100 "data_size": 7936 00:46:43.100 } 00:46:43.100 ] 00:46:43.100 }' 00:46:43.100 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:43.100 05:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:43.362 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:46:43.362 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:46:43.362 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:46:43.362 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:46:43.362 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:46:43.362 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:43.362 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:43.362 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:43.362 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:43.620 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:43.620 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:46:43.620 "name": "raid_bdev1", 00:46:43.620 "uuid": 
"4960cf30-6e18-40b5-9b6a-a7f85ccda04e", 00:46:43.620 "strip_size_kb": 0, 00:46:43.620 "state": "online", 00:46:43.620 "raid_level": "raid1", 00:46:43.620 "superblock": true, 00:46:43.620 "num_base_bdevs": 2, 00:46:43.620 "num_base_bdevs_discovered": 2, 00:46:43.620 "num_base_bdevs_operational": 2, 00:46:43.620 "base_bdevs_list": [ 00:46:43.620 { 00:46:43.620 "name": "spare", 00:46:43.620 "uuid": "79edacc4-fd23-541d-871c-d4541f2b050a", 00:46:43.620 "is_configured": true, 00:46:43.620 "data_offset": 256, 00:46:43.620 "data_size": 7936 00:46:43.620 }, 00:46:43.620 { 00:46:43.620 "name": "BaseBdev2", 00:46:43.620 "uuid": "9707b0db-71f1-51e3-9d02-6a864ac36a93", 00:46:43.620 "is_configured": true, 00:46:43.620 "data_offset": 256, 00:46:43.620 "data_size": 7936 00:46:43.620 } 00:46:43.620 ] 00:46:43.620 }' 00:46:43.620 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:46:43.620 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:46:43.620 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:46:43.620 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:46:43.620 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:43.620 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:43.620 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:43.620 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:46:43.620 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:43.620 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # 
[[ spare == \s\p\a\r\e ]] 00:46:43.620 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:46:43.620 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:43.621 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:43.621 [2024-12-09 05:37:30.551589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:46:43.621 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:43.621 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:46:43.621 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:46:43.621 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:46:43.621 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:43.621 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:43.621 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:46:43.621 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:43.621 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:43.621 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:46:43.621 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:43.621 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:43.621 05:37:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:43.621 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:43.621 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:43.621 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:43.878 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:43.878 "name": "raid_bdev1", 00:46:43.878 "uuid": "4960cf30-6e18-40b5-9b6a-a7f85ccda04e", 00:46:43.878 "strip_size_kb": 0, 00:46:43.878 "state": "online", 00:46:43.878 "raid_level": "raid1", 00:46:43.878 "superblock": true, 00:46:43.878 "num_base_bdevs": 2, 00:46:43.878 "num_base_bdevs_discovered": 1, 00:46:43.878 "num_base_bdevs_operational": 1, 00:46:43.878 "base_bdevs_list": [ 00:46:43.878 { 00:46:43.878 "name": null, 00:46:43.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:43.878 "is_configured": false, 00:46:43.878 "data_offset": 0, 00:46:43.878 "data_size": 7936 00:46:43.878 }, 00:46:43.878 { 00:46:43.878 "name": "BaseBdev2", 00:46:43.878 "uuid": "9707b0db-71f1-51e3-9d02-6a864ac36a93", 00:46:43.878 "is_configured": true, 00:46:43.878 "data_offset": 256, 00:46:43.878 "data_size": 7936 00:46:43.878 } 00:46:43.878 ] 00:46:43.878 }' 00:46:43.878 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:43.878 05:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:44.135 05:37:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:46:44.135 05:37:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:44.135 05:37:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:46:44.135 [2024-12-09 05:37:31.095960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:46:44.135 [2024-12-09 05:37:31.096452] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:46:44.135 [2024-12-09 05:37:31.096488] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:46:44.135 [2024-12-09 05:37:31.096553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:46:44.393 [2024-12-09 05:37:31.110380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:46:44.393 05:37:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:44.393 05:37:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:46:44.393 [2024-12-09 05:37:31.113102] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:46:45.328 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:46:45.328 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:46:45.328 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:46:45.328 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:46:45.328 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:46:45.328 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:45.328 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:45.328 05:37:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:45.328 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:45.328 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:45.328 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:46:45.328 "name": "raid_bdev1", 00:46:45.328 "uuid": "4960cf30-6e18-40b5-9b6a-a7f85ccda04e", 00:46:45.328 "strip_size_kb": 0, 00:46:45.328 "state": "online", 00:46:45.328 "raid_level": "raid1", 00:46:45.328 "superblock": true, 00:46:45.328 "num_base_bdevs": 2, 00:46:45.328 "num_base_bdevs_discovered": 2, 00:46:45.328 "num_base_bdevs_operational": 2, 00:46:45.328 "process": { 00:46:45.328 "type": "rebuild", 00:46:45.328 "target": "spare", 00:46:45.328 "progress": { 00:46:45.328 "blocks": 2560, 00:46:45.328 "percent": 32 00:46:45.328 } 00:46:45.328 }, 00:46:45.328 "base_bdevs_list": [ 00:46:45.328 { 00:46:45.328 "name": "spare", 00:46:45.328 "uuid": "79edacc4-fd23-541d-871c-d4541f2b050a", 00:46:45.328 "is_configured": true, 00:46:45.328 "data_offset": 256, 00:46:45.328 "data_size": 7936 00:46:45.328 }, 00:46:45.328 { 00:46:45.328 "name": "BaseBdev2", 00:46:45.328 "uuid": "9707b0db-71f1-51e3-9d02-6a864ac36a93", 00:46:45.328 "is_configured": true, 00:46:45.328 "data_offset": 256, 00:46:45.328 "data_size": 7936 00:46:45.328 } 00:46:45.328 ] 00:46:45.328 }' 00:46:45.328 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:46:45.328 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:46:45.328 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:46:45.328 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == 
\s\p\a\r\e ]] 00:46:45.328 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:46:45.328 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:45.328 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:45.328 [2024-12-09 05:37:32.283334] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:46:45.586 [2024-12-09 05:37:32.323922] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:46:45.586 [2024-12-09 05:37:32.324194] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:46:45.586 [2024-12-09 05:37:32.324467] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:46:45.586 [2024-12-09 05:37:32.324510] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:46:45.586 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:45.586 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:46:45.586 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:46:45.586 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:46:45.586 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:45.586 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:45.586 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:46:45.586 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:46:45.586 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:45.586 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:46:45.586 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:45.586 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:45.586 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:45.586 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:45.586 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:45.586 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:45.586 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:45.587 "name": "raid_bdev1", 00:46:45.587 "uuid": "4960cf30-6e18-40b5-9b6a-a7f85ccda04e", 00:46:45.587 "strip_size_kb": 0, 00:46:45.587 "state": "online", 00:46:45.587 "raid_level": "raid1", 00:46:45.587 "superblock": true, 00:46:45.587 "num_base_bdevs": 2, 00:46:45.587 "num_base_bdevs_discovered": 1, 00:46:45.587 "num_base_bdevs_operational": 1, 00:46:45.587 "base_bdevs_list": [ 00:46:45.587 { 00:46:45.587 "name": null, 00:46:45.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:45.587 "is_configured": false, 00:46:45.587 "data_offset": 0, 00:46:45.587 "data_size": 7936 00:46:45.587 }, 00:46:45.587 { 00:46:45.587 "name": "BaseBdev2", 00:46:45.587 "uuid": "9707b0db-71f1-51e3-9d02-6a864ac36a93", 00:46:45.587 "is_configured": true, 00:46:45.587 "data_offset": 256, 00:46:45.587 "data_size": 7936 00:46:45.587 } 00:46:45.587 ] 00:46:45.587 }' 00:46:45.587 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:45.587 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:46.153 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:46:46.153 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:46.153 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:46.153 [2024-12-09 05:37:32.890089] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:46:46.153 [2024-12-09 05:37:32.890357] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:46:46.153 [2024-12-09 05:37:32.890539] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:46:46.153 [2024-12-09 05:37:32.890704] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:46:46.153 [2024-12-09 05:37:32.891407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:46:46.153 [2024-12-09 05:37:32.891458] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:46:46.153 [2024-12-09 05:37:32.891552] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:46:46.153 [2024-12-09 05:37:32.891578] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:46:46.153 [2024-12-09 05:37:32.891594] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:46:46.153 [2024-12-09 05:37:32.891628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:46:46.153 [2024-12-09 05:37:32.906012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:46:46.153 spare 00:46:46.153 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:46.153 05:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:46:46.153 [2024-12-09 05:37:32.908742] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:46:47.089 05:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:46:47.089 05:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:46:47.089 05:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:46:47.089 05:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:46:47.089 05:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:46:47.089 05:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:47.089 05:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:47.089 05:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:47.089 05:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:47.089 05:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:47.089 05:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:46:47.089 "name": 
"raid_bdev1", 00:46:47.089 "uuid": "4960cf30-6e18-40b5-9b6a-a7f85ccda04e", 00:46:47.089 "strip_size_kb": 0, 00:46:47.089 "state": "online", 00:46:47.089 "raid_level": "raid1", 00:46:47.089 "superblock": true, 00:46:47.089 "num_base_bdevs": 2, 00:46:47.089 "num_base_bdevs_discovered": 2, 00:46:47.089 "num_base_bdevs_operational": 2, 00:46:47.089 "process": { 00:46:47.089 "type": "rebuild", 00:46:47.089 "target": "spare", 00:46:47.089 "progress": { 00:46:47.089 "blocks": 2560, 00:46:47.089 "percent": 32 00:46:47.089 } 00:46:47.089 }, 00:46:47.089 "base_bdevs_list": [ 00:46:47.089 { 00:46:47.089 "name": "spare", 00:46:47.089 "uuid": "79edacc4-fd23-541d-871c-d4541f2b050a", 00:46:47.089 "is_configured": true, 00:46:47.089 "data_offset": 256, 00:46:47.089 "data_size": 7936 00:46:47.089 }, 00:46:47.089 { 00:46:47.089 "name": "BaseBdev2", 00:46:47.089 "uuid": "9707b0db-71f1-51e3-9d02-6a864ac36a93", 00:46:47.089 "is_configured": true, 00:46:47.089 "data_offset": 256, 00:46:47.089 "data_size": 7936 00:46:47.089 } 00:46:47.089 ] 00:46:47.089 }' 00:46:47.089 05:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:46:47.089 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:46:47.089 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:46:47.347 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:46:47.347 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:46:47.347 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:47.347 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:47.347 [2024-12-09 05:37:34.087180] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:46:47.347 [2024-12-09 05:37:34.119052] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:46:47.347 [2024-12-09 05:37:34.119183] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:46:47.347 [2024-12-09 05:37:34.119211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:46:47.347 [2024-12-09 05:37:34.119223] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:46:47.347 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:47.347 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:46:47.347 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:46:47.347 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:46:47.347 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:47.347 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:47.347 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:46:47.347 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:47.347 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:47.347 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:46:47.347 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:47.347 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:46:47.347 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:47.347 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:47.347 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:47.347 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:47.347 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:47.347 "name": "raid_bdev1", 00:46:47.347 "uuid": "4960cf30-6e18-40b5-9b6a-a7f85ccda04e", 00:46:47.347 "strip_size_kb": 0, 00:46:47.347 "state": "online", 00:46:47.347 "raid_level": "raid1", 00:46:47.347 "superblock": true, 00:46:47.347 "num_base_bdevs": 2, 00:46:47.347 "num_base_bdevs_discovered": 1, 00:46:47.347 "num_base_bdevs_operational": 1, 00:46:47.347 "base_bdevs_list": [ 00:46:47.347 { 00:46:47.347 "name": null, 00:46:47.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:47.347 "is_configured": false, 00:46:47.347 "data_offset": 0, 00:46:47.347 "data_size": 7936 00:46:47.347 }, 00:46:47.347 { 00:46:47.347 "name": "BaseBdev2", 00:46:47.347 "uuid": "9707b0db-71f1-51e3-9d02-6a864ac36a93", 00:46:47.347 "is_configured": true, 00:46:47.347 "data_offset": 256, 00:46:47.347 "data_size": 7936 00:46:47.347 } 00:46:47.347 ] 00:46:47.347 }' 00:46:47.347 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:47.347 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:47.915 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:46:47.915 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:46:47.915 05:37:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:46:47.915 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:46:47.915 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:46:47.915 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:47.915 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:47.915 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:47.915 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:47.915 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:47.915 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:46:47.915 "name": "raid_bdev1", 00:46:47.915 "uuid": "4960cf30-6e18-40b5-9b6a-a7f85ccda04e", 00:46:47.915 "strip_size_kb": 0, 00:46:47.915 "state": "online", 00:46:47.915 "raid_level": "raid1", 00:46:47.915 "superblock": true, 00:46:47.915 "num_base_bdevs": 2, 00:46:47.915 "num_base_bdevs_discovered": 1, 00:46:47.915 "num_base_bdevs_operational": 1, 00:46:47.915 "base_bdevs_list": [ 00:46:47.915 { 00:46:47.915 "name": null, 00:46:47.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:47.915 "is_configured": false, 00:46:47.915 "data_offset": 0, 00:46:47.915 "data_size": 7936 00:46:47.915 }, 00:46:47.915 { 00:46:47.915 "name": "BaseBdev2", 00:46:47.915 "uuid": "9707b0db-71f1-51e3-9d02-6a864ac36a93", 00:46:47.915 "is_configured": true, 00:46:47.915 "data_offset": 256, 00:46:47.915 "data_size": 7936 00:46:47.915 } 00:46:47.915 ] 00:46:47.915 }' 00:46:47.915 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:46:47.915 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:46:47.915 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:46:47.915 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:46:47.915 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:46:47.915 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:47.915 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:47.915 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:47.915 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:46:47.915 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:47.915 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:47.915 [2024-12-09 05:37:34.855668] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:46:47.915 [2024-12-09 05:37:34.855991] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:46:47.915 [2024-12-09 05:37:34.856042] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:46:47.915 [2024-12-09 05:37:34.856060] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:46:47.915 [2024-12-09 05:37:34.856425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:46:47.915 [2024-12-09 05:37:34.856460] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:46:47.915 [2024-12-09 05:37:34.856551] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:46:47.915 [2024-12-09 05:37:34.856572] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:46:47.915 [2024-12-09 05:37:34.856585] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:46:47.915 [2024-12-09 05:37:34.856598] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:46:47.915 BaseBdev1 00:46:47.915 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:47.915 05:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:46:49.351 05:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:46:49.351 05:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:46:49.351 05:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:46:49.351 05:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:49.351 05:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:49.351 05:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:46:49.351 05:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:49.351 05:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:49.351 05:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:46:49.351 05:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:49.351 05:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:49.351 05:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:49.351 05:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:49.351 05:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:49.351 05:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:49.351 05:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:49.351 "name": "raid_bdev1", 00:46:49.351 "uuid": "4960cf30-6e18-40b5-9b6a-a7f85ccda04e", 00:46:49.351 "strip_size_kb": 0, 00:46:49.351 "state": "online", 00:46:49.351 "raid_level": "raid1", 00:46:49.351 "superblock": true, 00:46:49.351 "num_base_bdevs": 2, 00:46:49.351 "num_base_bdevs_discovered": 1, 00:46:49.351 "num_base_bdevs_operational": 1, 00:46:49.351 "base_bdevs_list": [ 00:46:49.351 { 00:46:49.351 "name": null, 00:46:49.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:49.351 "is_configured": false, 00:46:49.351 "data_offset": 0, 00:46:49.351 "data_size": 7936 00:46:49.351 }, 00:46:49.351 { 00:46:49.351 "name": "BaseBdev2", 00:46:49.351 "uuid": "9707b0db-71f1-51e3-9d02-6a864ac36a93", 00:46:49.351 "is_configured": true, 00:46:49.351 "data_offset": 256, 00:46:49.351 "data_size": 7936 00:46:49.351 } 00:46:49.351 ] 00:46:49.351 }' 00:46:49.351 05:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:49.352 05:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:49.610 05:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:46:49.610 05:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:46:49.610 05:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:46:49.610 05:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:46:49.610 05:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:46:49.610 05:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:49.610 05:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:49.610 05:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:49.610 05:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:49.610 05:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:49.610 05:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:46:49.610 "name": "raid_bdev1", 00:46:49.610 "uuid": "4960cf30-6e18-40b5-9b6a-a7f85ccda04e", 00:46:49.610 "strip_size_kb": 0, 00:46:49.610 "state": "online", 00:46:49.610 "raid_level": "raid1", 00:46:49.610 "superblock": true, 00:46:49.610 "num_base_bdevs": 2, 00:46:49.610 "num_base_bdevs_discovered": 1, 00:46:49.610 "num_base_bdevs_operational": 1, 00:46:49.610 "base_bdevs_list": [ 00:46:49.610 { 00:46:49.610 "name": null, 00:46:49.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:49.610 "is_configured": false, 00:46:49.610 "data_offset": 0, 00:46:49.610 "data_size": 7936 00:46:49.610 }, 00:46:49.610 { 00:46:49.610 "name": "BaseBdev2", 00:46:49.610 "uuid": "9707b0db-71f1-51e3-9d02-6a864ac36a93", 00:46:49.610 "is_configured": 
true, 00:46:49.610 "data_offset": 256, 00:46:49.610 "data_size": 7936 00:46:49.610 } 00:46:49.610 ] 00:46:49.610 }' 00:46:49.610 05:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:46:49.610 05:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:46:49.611 05:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:46:49.870 05:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:46:49.870 05:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:46:49.870 05:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:46:49.870 05:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:46:49.870 05:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:46:49.870 05:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:49.870 05:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:46:49.870 05:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:49.870 05:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:46:49.870 05:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:49.870 05:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:49.870 [2024-12-09 05:37:36.600875] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:46:49.870 [2024-12-09 05:37:36.601189] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:46:49.870 [2024-12-09 05:37:36.601214] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:46:49.870 request: 00:46:49.870 { 00:46:49.870 "base_bdev": "BaseBdev1", 00:46:49.870 "raid_bdev": "raid_bdev1", 00:46:49.870 "method": "bdev_raid_add_base_bdev", 00:46:49.870 "req_id": 1 00:46:49.870 } 00:46:49.870 Got JSON-RPC error response 00:46:49.870 response: 00:46:49.870 { 00:46:49.870 "code": -22, 00:46:49.870 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:46:49.870 } 00:46:49.870 05:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:46:49.870 05:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:46:49.870 05:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:49.870 05:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:49.870 05:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:49.870 05:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:46:50.807 05:37:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:46:50.807 05:37:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:46:50.807 05:37:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:46:50.807 05:37:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:46:50.807 05:37:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:50.807 05:37:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:46:50.807 05:37:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:50.807 05:37:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:50.807 05:37:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:46:50.807 05:37:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:50.807 05:37:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:50.807 05:37:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:50.807 05:37:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:50.807 05:37:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:50.807 05:37:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:50.807 05:37:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:50.807 "name": "raid_bdev1", 00:46:50.807 "uuid": "4960cf30-6e18-40b5-9b6a-a7f85ccda04e", 00:46:50.807 "strip_size_kb": 0, 00:46:50.807 "state": "online", 00:46:50.807 "raid_level": "raid1", 00:46:50.807 "superblock": true, 00:46:50.807 "num_base_bdevs": 2, 00:46:50.807 "num_base_bdevs_discovered": 1, 00:46:50.807 "num_base_bdevs_operational": 1, 00:46:50.807 "base_bdevs_list": [ 00:46:50.807 { 00:46:50.807 "name": null, 00:46:50.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:50.807 "is_configured": false, 00:46:50.807 
"data_offset": 0, 00:46:50.807 "data_size": 7936 00:46:50.807 }, 00:46:50.807 { 00:46:50.807 "name": "BaseBdev2", 00:46:50.807 "uuid": "9707b0db-71f1-51e3-9d02-6a864ac36a93", 00:46:50.807 "is_configured": true, 00:46:50.807 "data_offset": 256, 00:46:50.807 "data_size": 7936 00:46:50.807 } 00:46:50.807 ] 00:46:50.807 }' 00:46:50.807 05:37:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:50.807 05:37:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:51.374 05:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:46:51.374 05:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:46:51.374 05:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:46:51.374 05:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:46:51.374 05:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:46:51.374 05:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:51.374 05:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:51.374 05:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:51.374 05:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:51.374 05:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:51.374 05:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:46:51.374 "name": "raid_bdev1", 00:46:51.374 "uuid": "4960cf30-6e18-40b5-9b6a-a7f85ccda04e", 00:46:51.374 
"strip_size_kb": 0, 00:46:51.374 "state": "online", 00:46:51.374 "raid_level": "raid1", 00:46:51.374 "superblock": true, 00:46:51.374 "num_base_bdevs": 2, 00:46:51.374 "num_base_bdevs_discovered": 1, 00:46:51.374 "num_base_bdevs_operational": 1, 00:46:51.374 "base_bdevs_list": [ 00:46:51.374 { 00:46:51.374 "name": null, 00:46:51.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:51.374 "is_configured": false, 00:46:51.374 "data_offset": 0, 00:46:51.374 "data_size": 7936 00:46:51.374 }, 00:46:51.374 { 00:46:51.374 "name": "BaseBdev2", 00:46:51.374 "uuid": "9707b0db-71f1-51e3-9d02-6a864ac36a93", 00:46:51.374 "is_configured": true, 00:46:51.374 "data_offset": 256, 00:46:51.374 "data_size": 7936 00:46:51.374 } 00:46:51.374 ] 00:46:51.374 }' 00:46:51.374 05:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:46:51.374 05:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:46:51.374 05:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:46:51.374 05:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:46:51.374 05:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88346 00:46:51.375 05:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88346 ']' 00:46:51.375 05:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88346 00:46:51.375 05:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:46:51.375 05:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:51.375 05:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88346 00:46:51.634 killing process with 
pid 88346 00:46:51.634 Received shutdown signal, test time was about 60.000000 seconds 00:46:51.634 00:46:51.634 Latency(us) 00:46:51.634 [2024-12-09T05:37:38.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:51.634 [2024-12-09T05:37:38.606Z] =================================================================================================================== 00:46:51.634 [2024-12-09T05:37:38.606Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:46:51.634 05:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:51.634 05:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:51.634 05:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88346' 00:46:51.634 05:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88346 00:46:51.634 [2024-12-09 05:37:38.360119] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:46:51.634 05:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88346 00:46:51.634 [2024-12-09 05:37:38.360294] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:46:51.634 [2024-12-09 05:37:38.360421] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:46:51.634 [2024-12-09 05:37:38.360441] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:46:51.892 [2024-12-09 05:37:38.687116] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:46:53.266 05:37:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:46:53.266 00:46:53.266 real 0m22.503s 00:46:53.266 user 0m30.367s 00:46:53.266 sys 0m2.891s 00:46:53.266 05:37:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:53.266 05:37:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:46:53.266 ************************************ 00:46:53.266 END TEST raid_rebuild_test_sb_md_separate 00:46:53.266 ************************************ 00:46:53.266 05:37:40 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:46:53.266 05:37:40 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:46:53.266 05:37:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:46:53.266 05:37:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:53.266 05:37:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:46:53.266 ************************************ 00:46:53.266 START TEST raid_state_function_test_sb_md_interleaved 00:46:53.266 ************************************ 00:46:53.266 05:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:46:53.266 05:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:46:53.266 05:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:46:53.266 05:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:46:53.266 05:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:46:53.266 05:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:46:53.266 05:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:46:53.266 05:37:40 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:46:53.266 05:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:46:53.266 05:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:46:53.266 05:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:46:53.266 05:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:46:53.266 05:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:46:53.266 05:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:46:53.266 05:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:46:53.266 05:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:46:53.266 05:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:46:53.266 05:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:46:53.266 05:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:46:53.266 05:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:46:53.266 05:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:46:53.266 05:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:46:53.266 05:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:46:53.266 05:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=89058 00:46:53.266 05:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 89058' 00:46:53.266 Process raid pid: 89058 00:46:53.266 05:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 89058 00:46:53.266 05:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:46:53.266 05:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89058 ']' 00:46:53.266 05:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:53.266 05:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:53.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:53.266 05:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:53.266 05:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:53.266 05:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:46:53.266 [2024-12-09 05:37:40.185412] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:46:53.266 [2024-12-09 05:37:40.185610] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:53.525 [2024-12-09 05:37:40.386414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:53.783 [2024-12-09 05:37:40.555944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:54.041 [2024-12-09 05:37:40.791937] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:46:54.041 [2024-12-09 05:37:40.792008] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:46:54.300 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:54.300 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:46:54.300 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:46:54.300 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:54.300 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:46:54.300 [2024-12-09 05:37:41.220353] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:46:54.300 [2024-12-09 05:37:41.220454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:46:54.300 [2024-12-09 05:37:41.220469] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:46:54.300 [2024-12-09 05:37:41.220484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:46:54.300 05:37:41 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:54.300 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:46:54.300 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:46:54.300 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:46:54.300 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:54.300 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:54.300 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:46:54.300 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:54.300 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:54.300 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:46:54.300 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:54.300 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:54.300 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:46:54.300 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:54.300 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:46:54.300 05:37:41 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:54.559 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:54.559 "name": "Existed_Raid", 00:46:54.559 "uuid": "cd8128c1-5e1a-4336-bf90-890323b81cd2", 00:46:54.559 "strip_size_kb": 0, 00:46:54.559 "state": "configuring", 00:46:54.559 "raid_level": "raid1", 00:46:54.559 "superblock": true, 00:46:54.559 "num_base_bdevs": 2, 00:46:54.559 "num_base_bdevs_discovered": 0, 00:46:54.559 "num_base_bdevs_operational": 2, 00:46:54.559 "base_bdevs_list": [ 00:46:54.559 { 00:46:54.559 "name": "BaseBdev1", 00:46:54.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:54.559 "is_configured": false, 00:46:54.559 "data_offset": 0, 00:46:54.559 "data_size": 0 00:46:54.559 }, 00:46:54.559 { 00:46:54.559 "name": "BaseBdev2", 00:46:54.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:54.559 "is_configured": false, 00:46:54.559 "data_offset": 0, 00:46:54.559 "data_size": 0 00:46:54.559 } 00:46:54.559 ] 00:46:54.559 }' 00:46:54.559 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:54.559 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:46:54.818 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:46:54.818 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:54.818 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:46:54.818 [2024-12-09 05:37:41.768491] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:46:54.818 [2024-12-09 05:37:41.768558] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:46:54.818 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:54.818 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:46:54.818 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:54.818 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:46:54.818 [2024-12-09 05:37:41.776442] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:46:54.818 [2024-12-09 05:37:41.776504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:46:54.818 [2024-12-09 05:37:41.776518] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:46:54.818 [2024-12-09 05:37:41.776537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:46:54.818 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:54.818 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:46:54.818 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:54.818 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:46:55.075 [2024-12-09 05:37:41.820620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:46:55.075 BaseBdev1 00:46:55.075 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.075 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:46:55.075 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:46:55.075 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:46:55.075 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:46:55.075 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:46:55.075 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:46:55.076 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:46:55.076 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.076 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:46:55.076 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.076 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:46:55.076 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.076 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:46:55.076 [ 00:46:55.076 { 00:46:55.076 "name": "BaseBdev1", 00:46:55.076 "aliases": [ 00:46:55.076 "b2186e9c-bfb9-452c-b7ba-1f07c94fef82" 00:46:55.076 ], 00:46:55.076 "product_name": "Malloc disk", 00:46:55.076 "block_size": 4128, 00:46:55.076 "num_blocks": 8192, 00:46:55.076 "uuid": "b2186e9c-bfb9-452c-b7ba-1f07c94fef82", 00:46:55.076 "md_size": 32, 00:46:55.076 
"md_interleave": true, 00:46:55.076 "dif_type": 0, 00:46:55.076 "assigned_rate_limits": { 00:46:55.076 "rw_ios_per_sec": 0, 00:46:55.076 "rw_mbytes_per_sec": 0, 00:46:55.076 "r_mbytes_per_sec": 0, 00:46:55.076 "w_mbytes_per_sec": 0 00:46:55.076 }, 00:46:55.076 "claimed": true, 00:46:55.076 "claim_type": "exclusive_write", 00:46:55.076 "zoned": false, 00:46:55.076 "supported_io_types": { 00:46:55.076 "read": true, 00:46:55.076 "write": true, 00:46:55.076 "unmap": true, 00:46:55.076 "flush": true, 00:46:55.076 "reset": true, 00:46:55.076 "nvme_admin": false, 00:46:55.076 "nvme_io": false, 00:46:55.076 "nvme_io_md": false, 00:46:55.076 "write_zeroes": true, 00:46:55.076 "zcopy": true, 00:46:55.076 "get_zone_info": false, 00:46:55.076 "zone_management": false, 00:46:55.076 "zone_append": false, 00:46:55.076 "compare": false, 00:46:55.076 "compare_and_write": false, 00:46:55.076 "abort": true, 00:46:55.076 "seek_hole": false, 00:46:55.076 "seek_data": false, 00:46:55.076 "copy": true, 00:46:55.076 "nvme_iov_md": false 00:46:55.076 }, 00:46:55.076 "memory_domains": [ 00:46:55.076 { 00:46:55.076 "dma_device_id": "system", 00:46:55.076 "dma_device_type": 1 00:46:55.076 }, 00:46:55.076 { 00:46:55.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:46:55.076 "dma_device_type": 2 00:46:55.076 } 00:46:55.076 ], 00:46:55.076 "driver_specific": {} 00:46:55.076 } 00:46:55.076 ] 00:46:55.076 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.076 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:46:55.076 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:46:55.076 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:46:55.076 05:37:41 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:46:55.076 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:55.076 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:55.076 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:46:55.076 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:55.076 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:55.076 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:46:55.076 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:55.076 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:55.076 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.076 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:46:55.076 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:46:55.076 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.076 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:55.076 "name": "Existed_Raid", 00:46:55.076 "uuid": "f20ce0bf-3063-46b7-abc3-94d67b2a939d", 00:46:55.076 "strip_size_kb": 0, 00:46:55.076 "state": "configuring", 00:46:55.076 "raid_level": "raid1", 
00:46:55.076 "superblock": true, 00:46:55.076 "num_base_bdevs": 2, 00:46:55.076 "num_base_bdevs_discovered": 1, 00:46:55.076 "num_base_bdevs_operational": 2, 00:46:55.076 "base_bdevs_list": [ 00:46:55.076 { 00:46:55.076 "name": "BaseBdev1", 00:46:55.076 "uuid": "b2186e9c-bfb9-452c-b7ba-1f07c94fef82", 00:46:55.076 "is_configured": true, 00:46:55.076 "data_offset": 256, 00:46:55.076 "data_size": 7936 00:46:55.076 }, 00:46:55.076 { 00:46:55.076 "name": "BaseBdev2", 00:46:55.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:55.076 "is_configured": false, 00:46:55.076 "data_offset": 0, 00:46:55.076 "data_size": 0 00:46:55.076 } 00:46:55.076 ] 00:46:55.076 }' 00:46:55.076 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:55.076 05:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:46:55.685 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:46:55.685 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.685 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:46:55.685 [2024-12-09 05:37:42.412954] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:46:55.685 [2024-12-09 05:37:42.413041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:46:55.685 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.685 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:46:55.685 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:46:55.685 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:46:55.685 [2024-12-09 05:37:42.421008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:46:55.685 [2024-12-09 05:37:42.423743] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:46:55.685 [2024-12-09 05:37:42.423827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:46:55.685 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.685 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:46:55.685 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:46:55.685 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:46:55.685 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:46:55.685 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:46:55.685 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:55.685 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:55.685 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:46:55.685 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:55.685 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:55.685 
05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:46:55.685 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:55.685 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:55.685 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.685 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:46:55.685 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:46:55.685 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.685 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:55.685 "name": "Existed_Raid", 00:46:55.685 "uuid": "b1e2a1a1-d7df-489d-8b10-b28558b5a05f", 00:46:55.685 "strip_size_kb": 0, 00:46:55.685 "state": "configuring", 00:46:55.685 "raid_level": "raid1", 00:46:55.685 "superblock": true, 00:46:55.685 "num_base_bdevs": 2, 00:46:55.685 "num_base_bdevs_discovered": 1, 00:46:55.685 "num_base_bdevs_operational": 2, 00:46:55.685 "base_bdevs_list": [ 00:46:55.685 { 00:46:55.685 "name": "BaseBdev1", 00:46:55.685 "uuid": "b2186e9c-bfb9-452c-b7ba-1f07c94fef82", 00:46:55.685 "is_configured": true, 00:46:55.685 "data_offset": 256, 00:46:55.685 "data_size": 7936 00:46:55.685 }, 00:46:55.685 { 00:46:55.685 "name": "BaseBdev2", 00:46:55.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:46:55.685 "is_configured": false, 00:46:55.685 "data_offset": 0, 00:46:55.685 "data_size": 0 00:46:55.685 } 00:46:55.685 ] 00:46:55.685 }' 00:46:55.685 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:46:55.685 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:46:56.252 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:46:56.252 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:56.252 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:46:56.252 [2024-12-09 05:37:42.980674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:46:56.252 [2024-12-09 05:37:42.981007] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:46:56.252 [2024-12-09 05:37:42.981028] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:46:56.252 [2024-12-09 05:37:42.981134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:46:56.252 [2024-12-09 05:37:42.981241] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:46:56.252 [2024-12-09 05:37:42.981260] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:46:56.252 [2024-12-09 05:37:42.981361] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:46:56.252 BaseBdev2 00:46:56.252 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:56.252 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:46:56.252 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:46:56.252 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:46:56.252 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:46:56.252 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:46:56.252 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:46:56.252 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:46:56.252 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:56.252 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:46:56.252 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:56.252 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:46:56.252 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:56.252 05:37:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:46:56.252 [ 00:46:56.252 { 00:46:56.252 "name": "BaseBdev2", 00:46:56.252 "aliases": [ 00:46:56.252 "25d54fe0-7e29-45a3-b96f-02d1af2dd53c" 00:46:56.252 ], 00:46:56.252 "product_name": "Malloc disk", 00:46:56.252 "block_size": 4128, 00:46:56.252 "num_blocks": 8192, 00:46:56.252 "uuid": "25d54fe0-7e29-45a3-b96f-02d1af2dd53c", 00:46:56.252 "md_size": 32, 00:46:56.252 "md_interleave": true, 00:46:56.252 "dif_type": 0, 00:46:56.252 "assigned_rate_limits": { 00:46:56.252 "rw_ios_per_sec": 0, 00:46:56.252 "rw_mbytes_per_sec": 0, 00:46:56.252 "r_mbytes_per_sec": 0, 00:46:56.252 "w_mbytes_per_sec": 0 00:46:56.252 }, 00:46:56.252 "claimed": true, 00:46:56.252 "claim_type": "exclusive_write", 
00:46:56.252 "zoned": false, 00:46:56.252 "supported_io_types": { 00:46:56.252 "read": true, 00:46:56.252 "write": true, 00:46:56.252 "unmap": true, 00:46:56.252 "flush": true, 00:46:56.252 "reset": true, 00:46:56.252 "nvme_admin": false, 00:46:56.252 "nvme_io": false, 00:46:56.252 "nvme_io_md": false, 00:46:56.252 "write_zeroes": true, 00:46:56.252 "zcopy": true, 00:46:56.252 "get_zone_info": false, 00:46:56.252 "zone_management": false, 00:46:56.252 "zone_append": false, 00:46:56.252 "compare": false, 00:46:56.252 "compare_and_write": false, 00:46:56.252 "abort": true, 00:46:56.252 "seek_hole": false, 00:46:56.252 "seek_data": false, 00:46:56.252 "copy": true, 00:46:56.252 "nvme_iov_md": false 00:46:56.252 }, 00:46:56.252 "memory_domains": [ 00:46:56.252 { 00:46:56.252 "dma_device_id": "system", 00:46:56.252 "dma_device_type": 1 00:46:56.252 }, 00:46:56.252 { 00:46:56.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:46:56.252 "dma_device_type": 2 00:46:56.252 } 00:46:56.252 ], 00:46:56.252 "driver_specific": {} 00:46:56.252 } 00:46:56.252 ] 00:46:56.252 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:56.252 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:46:56.252 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:46:56.252 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:46:56.252 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:46:56.252 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:46:56.252 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:46:56.252 
05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:56.252 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:56.252 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:46:56.252 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:56.252 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:56.252 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:46:56.252 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:56.252 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:56.252 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:56.252 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:46:56.252 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:46:56.252 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:56.252 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:56.252 "name": "Existed_Raid", 00:46:56.252 "uuid": "b1e2a1a1-d7df-489d-8b10-b28558b5a05f", 00:46:56.252 "strip_size_kb": 0, 00:46:56.252 "state": "online", 00:46:56.252 "raid_level": "raid1", 00:46:56.252 "superblock": true, 00:46:56.252 "num_base_bdevs": 2, 00:46:56.252 "num_base_bdevs_discovered": 2, 00:46:56.252 
"num_base_bdevs_operational": 2, 00:46:56.252 "base_bdevs_list": [ 00:46:56.252 { 00:46:56.252 "name": "BaseBdev1", 00:46:56.252 "uuid": "b2186e9c-bfb9-452c-b7ba-1f07c94fef82", 00:46:56.252 "is_configured": true, 00:46:56.252 "data_offset": 256, 00:46:56.252 "data_size": 7936 00:46:56.252 }, 00:46:56.252 { 00:46:56.252 "name": "BaseBdev2", 00:46:56.252 "uuid": "25d54fe0-7e29-45a3-b96f-02d1af2dd53c", 00:46:56.252 "is_configured": true, 00:46:56.252 "data_offset": 256, 00:46:56.252 "data_size": 7936 00:46:56.252 } 00:46:56.252 ] 00:46:56.252 }' 00:46:56.252 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:56.252 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:46:56.821 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:46:56.821 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:46:56.821 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:46:56.821 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:46:56.821 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:46:56.821 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:46:56.821 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:46:56.821 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:56.821 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:46:56.821 05:37:43 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:46:56.821 [2024-12-09 05:37:43.541367] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:46:56.821 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:56.821 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:46:56.821 "name": "Existed_Raid", 00:46:56.821 "aliases": [ 00:46:56.821 "b1e2a1a1-d7df-489d-8b10-b28558b5a05f" 00:46:56.821 ], 00:46:56.821 "product_name": "Raid Volume", 00:46:56.821 "block_size": 4128, 00:46:56.821 "num_blocks": 7936, 00:46:56.821 "uuid": "b1e2a1a1-d7df-489d-8b10-b28558b5a05f", 00:46:56.821 "md_size": 32, 00:46:56.821 "md_interleave": true, 00:46:56.821 "dif_type": 0, 00:46:56.821 "assigned_rate_limits": { 00:46:56.821 "rw_ios_per_sec": 0, 00:46:56.821 "rw_mbytes_per_sec": 0, 00:46:56.821 "r_mbytes_per_sec": 0, 00:46:56.821 "w_mbytes_per_sec": 0 00:46:56.821 }, 00:46:56.821 "claimed": false, 00:46:56.821 "zoned": false, 00:46:56.821 "supported_io_types": { 00:46:56.821 "read": true, 00:46:56.821 "write": true, 00:46:56.821 "unmap": false, 00:46:56.821 "flush": false, 00:46:56.821 "reset": true, 00:46:56.821 "nvme_admin": false, 00:46:56.821 "nvme_io": false, 00:46:56.821 "nvme_io_md": false, 00:46:56.821 "write_zeroes": true, 00:46:56.821 "zcopy": false, 00:46:56.821 "get_zone_info": false, 00:46:56.821 "zone_management": false, 00:46:56.821 "zone_append": false, 00:46:56.821 "compare": false, 00:46:56.821 "compare_and_write": false, 00:46:56.821 "abort": false, 00:46:56.821 "seek_hole": false, 00:46:56.821 "seek_data": false, 00:46:56.821 "copy": false, 00:46:56.821 "nvme_iov_md": false 00:46:56.821 }, 00:46:56.821 "memory_domains": [ 00:46:56.821 { 00:46:56.821 "dma_device_id": "system", 00:46:56.821 "dma_device_type": 1 00:46:56.821 }, 00:46:56.821 { 00:46:56.821 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:46:56.821 "dma_device_type": 2 00:46:56.821 }, 00:46:56.821 { 00:46:56.821 "dma_device_id": "system", 00:46:56.821 "dma_device_type": 1 00:46:56.821 }, 00:46:56.821 { 00:46:56.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:46:56.821 "dma_device_type": 2 00:46:56.821 } 00:46:56.821 ], 00:46:56.821 "driver_specific": { 00:46:56.821 "raid": { 00:46:56.821 "uuid": "b1e2a1a1-d7df-489d-8b10-b28558b5a05f", 00:46:56.821 "strip_size_kb": 0, 00:46:56.821 "state": "online", 00:46:56.821 "raid_level": "raid1", 00:46:56.821 "superblock": true, 00:46:56.821 "num_base_bdevs": 2, 00:46:56.821 "num_base_bdevs_discovered": 2, 00:46:56.821 "num_base_bdevs_operational": 2, 00:46:56.821 "base_bdevs_list": [ 00:46:56.821 { 00:46:56.821 "name": "BaseBdev1", 00:46:56.821 "uuid": "b2186e9c-bfb9-452c-b7ba-1f07c94fef82", 00:46:56.821 "is_configured": true, 00:46:56.821 "data_offset": 256, 00:46:56.821 "data_size": 7936 00:46:56.821 }, 00:46:56.821 { 00:46:56.821 "name": "BaseBdev2", 00:46:56.821 "uuid": "25d54fe0-7e29-45a3-b96f-02d1af2dd53c", 00:46:56.821 "is_configured": true, 00:46:56.821 "data_offset": 256, 00:46:56.821 "data_size": 7936 00:46:56.821 } 00:46:56.821 ] 00:46:56.821 } 00:46:56.821 } 00:46:56.821 }' 00:46:56.821 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:46:56.821 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:46:56.821 BaseBdev2' 00:46:56.821 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:46:56.821 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:46:56.822 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:46:56.822 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:46:56.822 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:56.822 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:46:56.822 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:46:56.822 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:56.822 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:46:56.822 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:46:56.822 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:46:56.822 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:46:56.822 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:56.822 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:46:56.822 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:46:56.822 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:57.081 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:46:57.081 
05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:46:57.081 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:46:57.081 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:57.081 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:46:57.081 [2024-12-09 05:37:43.817156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:46:57.081 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:57.081 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:46:57.081 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:46:57.081 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:46:57.081 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:46:57.081 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:46:57.081 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:46:57.081 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:46:57.081 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:46:57.081 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:46:57.081 05:37:43 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:46:57.081 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:46:57.081 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:46:57.081 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:46:57.081 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:46:57.081 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:46:57.081 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:57.081 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:57.081 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:46:57.081 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:46:57.081 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:57.081 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:46:57.081 "name": "Existed_Raid", 00:46:57.081 "uuid": "b1e2a1a1-d7df-489d-8b10-b28558b5a05f", 00:46:57.081 "strip_size_kb": 0, 00:46:57.081 "state": "online", 00:46:57.081 "raid_level": "raid1", 00:46:57.081 "superblock": true, 00:46:57.081 "num_base_bdevs": 2, 00:46:57.081 "num_base_bdevs_discovered": 1, 00:46:57.081 "num_base_bdevs_operational": 1, 00:46:57.081 "base_bdevs_list": [ 00:46:57.081 { 00:46:57.081 "name": null, 00:46:57.081 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:46:57.081 "is_configured": false, 00:46:57.081 "data_offset": 0, 00:46:57.081 "data_size": 7936 00:46:57.081 }, 00:46:57.081 { 00:46:57.081 "name": "BaseBdev2", 00:46:57.081 "uuid": "25d54fe0-7e29-45a3-b96f-02d1af2dd53c", 00:46:57.081 "is_configured": true, 00:46:57.081 "data_offset": 256, 00:46:57.081 "data_size": 7936 00:46:57.081 } 00:46:57.081 ] 00:46:57.081 }' 00:46:57.081 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:46:57.081 05:37:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:46:57.650 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:46:57.650 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:46:57.650 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:57.650 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:57.650 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:46:57.650 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:46:57.650 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:57.650 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:46:57.650 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:46:57.650 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:46:57.650 05:37:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:57.650 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:46:57.650 [2024-12-09 05:37:44.536252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:46:57.650 [2024-12-09 05:37:44.536423] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:46:57.910 [2024-12-09 05:37:44.631169] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:46:57.910 [2024-12-09 05:37:44.631262] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:46:57.910 [2024-12-09 05:37:44.631288] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:46:57.910 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:57.910 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:46:57.910 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:46:57.910 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:46:57.910 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:46:57.910 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:57.910 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:46:57.910 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:57.910 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:46:57.910 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:46:57.910 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:46:57.911 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 89058 00:46:57.911 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89058 ']' 00:46:57.911 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89058 00:46:57.911 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:46:57.911 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:57.911 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89058 00:46:57.911 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:57.911 killing process with pid 89058 00:46:57.911 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:57.911 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89058' 00:46:57.911 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89058 00:46:57.911 05:37:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89058 00:46:57.911 [2024-12-09 05:37:44.733656] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:46:57.911 [2024-12-09 05:37:44.750051] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:46:59.287 
05:37:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:46:59.287 00:46:59.287 real 0m5.993s 00:46:59.287 user 0m8.871s 00:46:59.287 sys 0m0.962s 00:46:59.287 05:37:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:59.287 05:37:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:46:59.287 ************************************ 00:46:59.287 END TEST raid_state_function_test_sb_md_interleaved 00:46:59.287 ************************************ 00:46:59.287 05:37:46 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:46:59.287 05:37:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:46:59.287 05:37:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:59.287 05:37:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:46:59.287 ************************************ 00:46:59.287 START TEST raid_superblock_test_md_interleaved 00:46:59.287 ************************************ 00:46:59.288 05:37:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:46:59.288 05:37:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:46:59.288 05:37:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:46:59.288 05:37:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:46:59.288 05:37:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:46:59.288 05:37:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:46:59.288 05:37:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:46:59.288 05:37:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:46:59.288 05:37:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:46:59.288 05:37:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:46:59.288 05:37:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:46:59.288 05:37:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:46:59.288 05:37:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:46:59.288 05:37:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:46:59.288 05:37:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:46:59.288 05:37:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:46:59.288 05:37:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89312 00:46:59.288 05:37:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89312 00:46:59.288 05:37:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89312 ']' 00:46:59.288 05:37:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:46:59.288 05:37:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:59.288 05:37:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:59.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:46:59.288 05:37:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:59.288 05:37:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:59.288 05:37:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:46:59.288 [2024-12-09 05:37:46.251140] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:46:59.288 [2024-12-09 05:37:46.251369] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89312 ] 00:46:59.546 [2024-12-09 05:37:46.448357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:59.805 [2024-12-09 05:37:46.598565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:00.064 [2024-12-09 05:37:46.828690] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:47:00.064 [2024-12-09 05:37:46.828811] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:47:00.323 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:00.323 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:47:00.323 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:47:00.323 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:47:00.323 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:47:00.323 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:47:00.323 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:47:00.323 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:47:00.323 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:47:00.323 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:47:00.323 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:47:00.323 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:00.323 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:00.582 malloc1 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:00.582 [2024-12-09 05:37:47.323157] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:47:00.582 [2024-12-09 05:37:47.323233] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:47:00.582 [2024-12-09 05:37:47.323266] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:47:00.582 [2024-12-09 05:37:47.323298] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:47:00.582 
[2024-12-09 05:37:47.326396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:47:00.582 [2024-12-09 05:37:47.326447] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:47:00.582 pt1 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:00.582 malloc2 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:00.582 [2024-12-09 05:37:47.383520] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:47:00.582 [2024-12-09 05:37:47.383618] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:47:00.582 [2024-12-09 05:37:47.383661] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:47:00.582 [2024-12-09 05:37:47.383690] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:47:00.582 [2024-12-09 05:37:47.386711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:47:00.582 [2024-12-09 05:37:47.386750] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:47:00.582 pt2 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:00.582 [2024-12-09 05:37:47.395593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:47:00.582 [2024-12-09 05:37:47.398564] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:47:00.582 [2024-12-09 05:37:47.398831] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:47:00.582 [2024-12-09 05:37:47.398866] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:47:00.582 [2024-12-09 05:37:47.398971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:47:00.582 [2024-12-09 05:37:47.399079] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:47:00.582 [2024-12-09 05:37:47.399108] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:47:00.582 [2024-12-09 05:37:47.399204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:47:00.582 
05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:00.582 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:47:00.582 "name": "raid_bdev1", 00:47:00.582 "uuid": "0d66256c-bc08-4041-9297-04ad95f76408", 00:47:00.582 "strip_size_kb": 0, 00:47:00.582 "state": "online", 00:47:00.582 "raid_level": "raid1", 00:47:00.582 "superblock": true, 00:47:00.582 "num_base_bdevs": 2, 00:47:00.582 "num_base_bdevs_discovered": 2, 00:47:00.582 "num_base_bdevs_operational": 2, 00:47:00.582 "base_bdevs_list": [ 00:47:00.582 { 00:47:00.582 "name": "pt1", 00:47:00.583 "uuid": "00000000-0000-0000-0000-000000000001", 00:47:00.583 "is_configured": true, 00:47:00.583 "data_offset": 256, 00:47:00.583 "data_size": 7936 00:47:00.583 }, 00:47:00.583 { 00:47:00.583 "name": "pt2", 00:47:00.583 "uuid": "00000000-0000-0000-0000-000000000002", 00:47:00.583 "is_configured": true, 00:47:00.583 "data_offset": 256, 00:47:00.583 "data_size": 7936 00:47:00.583 } 00:47:00.583 ] 00:47:00.583 }' 00:47:00.583 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:47:00.583 05:37:47 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:01.150 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:47:01.150 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:47:01.150 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:47:01.150 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:47:01.150 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:47:01.150 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:47:01.150 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:47:01.150 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:01.150 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:01.150 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:47:01.150 [2024-12-09 05:37:47.928416] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:47:01.150 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:01.150 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:47:01.150 "name": "raid_bdev1", 00:47:01.150 "aliases": [ 00:47:01.150 "0d66256c-bc08-4041-9297-04ad95f76408" 00:47:01.150 ], 00:47:01.150 "product_name": "Raid Volume", 00:47:01.150 "block_size": 4128, 00:47:01.150 "num_blocks": 7936, 00:47:01.150 "uuid": "0d66256c-bc08-4041-9297-04ad95f76408", 00:47:01.150 "md_size": 32, 
00:47:01.150 "md_interleave": true, 00:47:01.150 "dif_type": 0, 00:47:01.150 "assigned_rate_limits": { 00:47:01.150 "rw_ios_per_sec": 0, 00:47:01.150 "rw_mbytes_per_sec": 0, 00:47:01.150 "r_mbytes_per_sec": 0, 00:47:01.150 "w_mbytes_per_sec": 0 00:47:01.150 }, 00:47:01.150 "claimed": false, 00:47:01.150 "zoned": false, 00:47:01.150 "supported_io_types": { 00:47:01.150 "read": true, 00:47:01.150 "write": true, 00:47:01.150 "unmap": false, 00:47:01.150 "flush": false, 00:47:01.150 "reset": true, 00:47:01.150 "nvme_admin": false, 00:47:01.150 "nvme_io": false, 00:47:01.150 "nvme_io_md": false, 00:47:01.150 "write_zeroes": true, 00:47:01.150 "zcopy": false, 00:47:01.150 "get_zone_info": false, 00:47:01.150 "zone_management": false, 00:47:01.150 "zone_append": false, 00:47:01.150 "compare": false, 00:47:01.150 "compare_and_write": false, 00:47:01.150 "abort": false, 00:47:01.150 "seek_hole": false, 00:47:01.150 "seek_data": false, 00:47:01.150 "copy": false, 00:47:01.150 "nvme_iov_md": false 00:47:01.150 }, 00:47:01.150 "memory_domains": [ 00:47:01.150 { 00:47:01.150 "dma_device_id": "system", 00:47:01.150 "dma_device_type": 1 00:47:01.150 }, 00:47:01.150 { 00:47:01.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:47:01.150 "dma_device_type": 2 00:47:01.150 }, 00:47:01.150 { 00:47:01.150 "dma_device_id": "system", 00:47:01.150 "dma_device_type": 1 00:47:01.150 }, 00:47:01.150 { 00:47:01.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:47:01.150 "dma_device_type": 2 00:47:01.150 } 00:47:01.150 ], 00:47:01.150 "driver_specific": { 00:47:01.150 "raid": { 00:47:01.150 "uuid": "0d66256c-bc08-4041-9297-04ad95f76408", 00:47:01.150 "strip_size_kb": 0, 00:47:01.150 "state": "online", 00:47:01.150 "raid_level": "raid1", 00:47:01.150 "superblock": true, 00:47:01.150 "num_base_bdevs": 2, 00:47:01.150 "num_base_bdevs_discovered": 2, 00:47:01.150 "num_base_bdevs_operational": 2, 00:47:01.150 "base_bdevs_list": [ 00:47:01.150 { 00:47:01.150 "name": "pt1", 00:47:01.150 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:47:01.150 "is_configured": true, 00:47:01.150 "data_offset": 256, 00:47:01.150 "data_size": 7936 00:47:01.150 }, 00:47:01.150 { 00:47:01.150 "name": "pt2", 00:47:01.150 "uuid": "00000000-0000-0000-0000-000000000002", 00:47:01.150 "is_configured": true, 00:47:01.150 "data_offset": 256, 00:47:01.150 "data_size": 7936 00:47:01.150 } 00:47:01.150 ] 00:47:01.150 } 00:47:01.150 } 00:47:01.150 }' 00:47:01.150 05:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:47:01.150 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:47:01.150 pt2' 00:47:01.150 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:47:01.150 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:47:01.150 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:47:01.150 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:47:01.150 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:01.150 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:01.150 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:47:01.150 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:01.409 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:47:01.409 05:37:48 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:47:01.409 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:47:01.409 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:47:01.409 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:47:01.409 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:01.409 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:01.409 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:01.409 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:47:01.409 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:47:01.409 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:47:01.409 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:01.409 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:01.409 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:47:01.409 [2024-12-09 05:37:48.196434] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:47:01.409 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:01.409 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0d66256c-bc08-4041-9297-04ad95f76408 00:47:01.409 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 0d66256c-bc08-4041-9297-04ad95f76408 ']' 00:47:01.409 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:47:01.409 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:01.409 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:01.409 [2024-12-09 05:37:48.251996] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:47:01.409 [2024-12-09 05:37:48.252035] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:47:01.409 [2024-12-09 05:37:48.252173] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:47:01.409 [2024-12-09 05:37:48.252286] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:47:01.409 [2024-12-09 05:37:48.252305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:47:01.410 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:01.410 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:01.410 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:47:01.410 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:01.410 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:01.410 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:01.410 05:37:48 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:47:01.410 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:47:01.410 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:47:01.410 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:47:01.410 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:01.410 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:01.410 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:01.410 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:47:01.410 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:47:01.410 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:01.410 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:01.410 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:01.410 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:47:01.410 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:47:01.410 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:01.410 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:01.410 05:37:48 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:01.410 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:47:01.410 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:47:01.669 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:47:01.669 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:47:01.669 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:47:01.669 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:01.669 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:47:01.669 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:01.669 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:47:01.669 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:01.669 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:01.669 [2024-12-09 05:37:48.388179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:47:01.669 [2024-12-09 05:37:48.391328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:47:01.669 [2024-12-09 05:37:48.391482] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:47:01.669 [2024-12-09 05:37:48.391605] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:47:01.669 [2024-12-09 05:37:48.391631] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:47:01.669 [2024-12-09 05:37:48.391646] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:47:01.669 request: 00:47:01.669 { 00:47:01.669 "name": "raid_bdev1", 00:47:01.669 "raid_level": "raid1", 00:47:01.669 "base_bdevs": [ 00:47:01.669 "malloc1", 00:47:01.669 "malloc2" 00:47:01.669 ], 00:47:01.669 "superblock": false, 00:47:01.669 "method": "bdev_raid_create", 00:47:01.669 "req_id": 1 00:47:01.669 } 00:47:01.669 Got JSON-RPC error response 00:47:01.669 response: 00:47:01.669 { 00:47:01.669 "code": -17, 00:47:01.669 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:47:01.669 } 00:47:01.669 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:47:01.669 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:47:01.669 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:01.669 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:01.669 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:47:01.669 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:47:01.669 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:01.669 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:01.669 05:37:48 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:01.669 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:01.669 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:47:01.670 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:47:01.670 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:47:01.670 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:01.670 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:01.670 [2024-12-09 05:37:48.456237] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:47:01.670 [2024-12-09 05:37:48.456306] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:47:01.670 [2024-12-09 05:37:48.456330] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:47:01.670 [2024-12-09 05:37:48.456346] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:47:01.670 [2024-12-09 05:37:48.459394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:47:01.670 [2024-12-09 05:37:48.459463] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:47:01.670 [2024-12-09 05:37:48.459551] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:47:01.670 [2024-12-09 05:37:48.459620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:47:01.670 pt1 00:47:01.670 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:01.670 05:37:48 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:47:01.670 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:47:01.670 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:47:01.670 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:47:01.670 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:47:01.670 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:47:01.670 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:47:01.670 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:47:01.670 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:47:01.670 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:47:01.670 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:47:01.670 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:01.670 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:01.670 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:01.670 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:01.670 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:47:01.670 
"name": "raid_bdev1", 00:47:01.670 "uuid": "0d66256c-bc08-4041-9297-04ad95f76408", 00:47:01.670 "strip_size_kb": 0, 00:47:01.670 "state": "configuring", 00:47:01.670 "raid_level": "raid1", 00:47:01.670 "superblock": true, 00:47:01.670 "num_base_bdevs": 2, 00:47:01.670 "num_base_bdevs_discovered": 1, 00:47:01.670 "num_base_bdevs_operational": 2, 00:47:01.670 "base_bdevs_list": [ 00:47:01.670 { 00:47:01.670 "name": "pt1", 00:47:01.670 "uuid": "00000000-0000-0000-0000-000000000001", 00:47:01.670 "is_configured": true, 00:47:01.670 "data_offset": 256, 00:47:01.670 "data_size": 7936 00:47:01.670 }, 00:47:01.670 { 00:47:01.670 "name": null, 00:47:01.670 "uuid": "00000000-0000-0000-0000-000000000002", 00:47:01.670 "is_configured": false, 00:47:01.670 "data_offset": 256, 00:47:01.670 "data_size": 7936 00:47:01.670 } 00:47:01.670 ] 00:47:01.670 }' 00:47:01.670 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:47:01.670 05:37:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:02.254 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:47:02.254 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:47:02.254 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:47:02.254 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:47:02.254 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:02.254 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:02.254 [2024-12-09 05:37:49.016612] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:47:02.254 [2024-12-09 05:37:49.016738] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:47:02.254 [2024-12-09 05:37:49.016786] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:47:02.254 [2024-12-09 05:37:49.016820] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:47:02.254 [2024-12-09 05:37:49.017136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:47:02.254 [2024-12-09 05:37:49.017210] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:47:02.254 [2024-12-09 05:37:49.017282] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:47:02.254 [2024-12-09 05:37:49.017339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:47:02.254 [2024-12-09 05:37:49.017522] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:47:02.254 [2024-12-09 05:37:49.017542] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:47:02.254 [2024-12-09 05:37:49.017627] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:47:02.254 [2024-12-09 05:37:49.017728] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:47:02.254 [2024-12-09 05:37:49.017741] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:47:02.254 [2024-12-09 05:37:49.017873] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:47:02.254 pt2 00:47:02.254 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:02.254 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:47:02.254 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:47:02.254 05:37:49 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:47:02.254 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:47:02.254 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:47:02.254 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:47:02.254 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:47:02.254 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:47:02.254 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:47:02.254 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:47:02.254 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:47:02.254 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:47:02.254 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:02.254 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:02.254 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:02.254 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:47:02.254 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:02.254 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:47:02.254 "name": 
"raid_bdev1", 00:47:02.254 "uuid": "0d66256c-bc08-4041-9297-04ad95f76408", 00:47:02.254 "strip_size_kb": 0, 00:47:02.254 "state": "online", 00:47:02.254 "raid_level": "raid1", 00:47:02.254 "superblock": true, 00:47:02.254 "num_base_bdevs": 2, 00:47:02.254 "num_base_bdevs_discovered": 2, 00:47:02.254 "num_base_bdevs_operational": 2, 00:47:02.254 "base_bdevs_list": [ 00:47:02.254 { 00:47:02.254 "name": "pt1", 00:47:02.254 "uuid": "00000000-0000-0000-0000-000000000001", 00:47:02.254 "is_configured": true, 00:47:02.254 "data_offset": 256, 00:47:02.254 "data_size": 7936 00:47:02.254 }, 00:47:02.254 { 00:47:02.254 "name": "pt2", 00:47:02.254 "uuid": "00000000-0000-0000-0000-000000000002", 00:47:02.254 "is_configured": true, 00:47:02.254 "data_offset": 256, 00:47:02.254 "data_size": 7936 00:47:02.254 } 00:47:02.254 ] 00:47:02.254 }' 00:47:02.254 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:47:02.254 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:02.821 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:47:02.821 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:47:02.821 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:47:02.821 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:47:02.821 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:47:02.821 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:47:02.821 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:47:02.821 05:37:49 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:02.821 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:47:02.821 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:02.821 [2024-12-09 05:37:49.561357] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:47:02.821 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:02.821 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:47:02.822 "name": "raid_bdev1", 00:47:02.822 "aliases": [ 00:47:02.822 "0d66256c-bc08-4041-9297-04ad95f76408" 00:47:02.822 ], 00:47:02.822 "product_name": "Raid Volume", 00:47:02.822 "block_size": 4128, 00:47:02.822 "num_blocks": 7936, 00:47:02.822 "uuid": "0d66256c-bc08-4041-9297-04ad95f76408", 00:47:02.822 "md_size": 32, 00:47:02.822 "md_interleave": true, 00:47:02.822 "dif_type": 0, 00:47:02.822 "assigned_rate_limits": { 00:47:02.822 "rw_ios_per_sec": 0, 00:47:02.822 "rw_mbytes_per_sec": 0, 00:47:02.822 "r_mbytes_per_sec": 0, 00:47:02.822 "w_mbytes_per_sec": 0 00:47:02.822 }, 00:47:02.822 "claimed": false, 00:47:02.822 "zoned": false, 00:47:02.822 "supported_io_types": { 00:47:02.822 "read": true, 00:47:02.822 "write": true, 00:47:02.822 "unmap": false, 00:47:02.822 "flush": false, 00:47:02.822 "reset": true, 00:47:02.822 "nvme_admin": false, 00:47:02.822 "nvme_io": false, 00:47:02.822 "nvme_io_md": false, 00:47:02.822 "write_zeroes": true, 00:47:02.822 "zcopy": false, 00:47:02.822 "get_zone_info": false, 00:47:02.822 "zone_management": false, 00:47:02.822 "zone_append": false, 00:47:02.822 "compare": false, 00:47:02.822 "compare_and_write": false, 00:47:02.822 "abort": false, 00:47:02.822 "seek_hole": false, 00:47:02.822 "seek_data": false, 00:47:02.822 "copy": false, 00:47:02.822 "nvme_iov_md": 
false 00:47:02.822 }, 00:47:02.822 "memory_domains": [ 00:47:02.822 { 00:47:02.822 "dma_device_id": "system", 00:47:02.822 "dma_device_type": 1 00:47:02.822 }, 00:47:02.822 { 00:47:02.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:47:02.822 "dma_device_type": 2 00:47:02.822 }, 00:47:02.822 { 00:47:02.822 "dma_device_id": "system", 00:47:02.822 "dma_device_type": 1 00:47:02.822 }, 00:47:02.822 { 00:47:02.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:47:02.822 "dma_device_type": 2 00:47:02.822 } 00:47:02.822 ], 00:47:02.822 "driver_specific": { 00:47:02.822 "raid": { 00:47:02.822 "uuid": "0d66256c-bc08-4041-9297-04ad95f76408", 00:47:02.822 "strip_size_kb": 0, 00:47:02.822 "state": "online", 00:47:02.822 "raid_level": "raid1", 00:47:02.822 "superblock": true, 00:47:02.822 "num_base_bdevs": 2, 00:47:02.822 "num_base_bdevs_discovered": 2, 00:47:02.822 "num_base_bdevs_operational": 2, 00:47:02.822 "base_bdevs_list": [ 00:47:02.822 { 00:47:02.822 "name": "pt1", 00:47:02.822 "uuid": "00000000-0000-0000-0000-000000000001", 00:47:02.822 "is_configured": true, 00:47:02.822 "data_offset": 256, 00:47:02.822 "data_size": 7936 00:47:02.822 }, 00:47:02.822 { 00:47:02.822 "name": "pt2", 00:47:02.822 "uuid": "00000000-0000-0000-0000-000000000002", 00:47:02.822 "is_configured": true, 00:47:02.822 "data_offset": 256, 00:47:02.822 "data_size": 7936 00:47:02.822 } 00:47:02.822 ] 00:47:02.822 } 00:47:02.822 } 00:47:02.822 }' 00:47:02.822 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:47:02.822 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:47:02.822 pt2' 00:47:02.822 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:47:02.822 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:47:02.822 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:47:02.822 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:47:02.822 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:47:02.822 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:02.822 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:02.822 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:02.822 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:47:02.822 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:47:02.822 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:47:02.822 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:47:02.822 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:02.822 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:02.822 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:47:02.822 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:03.081 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:47:03.081 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:47:03.081 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:47:03.081 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:47:03.081 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:03.081 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:03.081 [2024-12-09 05:37:49.825464] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:47:03.081 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:03.081 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 0d66256c-bc08-4041-9297-04ad95f76408 '!=' 0d66256c-bc08-4041-9297-04ad95f76408 ']' 00:47:03.081 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:47:03.081 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:47:03.081 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:47:03.081 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:47:03.081 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:03.081 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:03.081 [2024-12-09 05:37:49.877205] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:47:03.081 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:47:03.081 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:47:03.081 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:47:03.081 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:47:03.082 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:47:03.082 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:47:03.082 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:47:03.082 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:47:03.082 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:47:03.082 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:47:03.082 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:47:03.082 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:03.082 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:03.082 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:47:03.082 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:03.082 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:03.082 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:47:03.082 "name": "raid_bdev1", 00:47:03.082 "uuid": "0d66256c-bc08-4041-9297-04ad95f76408", 00:47:03.082 "strip_size_kb": 0, 00:47:03.082 "state": "online", 00:47:03.082 "raid_level": "raid1", 00:47:03.082 "superblock": true, 00:47:03.082 "num_base_bdevs": 2, 00:47:03.082 "num_base_bdevs_discovered": 1, 00:47:03.082 "num_base_bdevs_operational": 1, 00:47:03.082 "base_bdevs_list": [ 00:47:03.082 { 00:47:03.082 "name": null, 00:47:03.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:47:03.082 "is_configured": false, 00:47:03.082 "data_offset": 0, 00:47:03.082 "data_size": 7936 00:47:03.082 }, 00:47:03.082 { 00:47:03.082 "name": "pt2", 00:47:03.082 "uuid": "00000000-0000-0000-0000-000000000002", 00:47:03.082 "is_configured": true, 00:47:03.082 "data_offset": 256, 00:47:03.082 "data_size": 7936 00:47:03.082 } 00:47:03.082 ] 00:47:03.082 }' 00:47:03.082 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:47:03.082 05:37:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:03.650 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:47:03.650 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:03.650 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:03.650 [2024-12-09 05:37:50.413385] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:47:03.650 [2024-12-09 05:37:50.413437] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:47:03.650 [2024-12-09 05:37:50.413574] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:47:03.650 [2024-12-09 05:37:50.413647] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:47:03.650 [2024-12-09 05:37:50.413680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:47:03.650 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:03.650 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:03.650 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:47:03.650 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:03.650 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:03.650 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:03.650 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:47:03.650 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:47:03.650 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:47:03.650 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:47:03.650 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:47:03.650 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:03.650 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:03.650 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:03.650 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:47:03.650 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:47:03.650 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:47:03.650 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:47:03.650 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:47:03.650 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:47:03.650 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:03.650 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:03.650 [2024-12-09 05:37:50.489454] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:47:03.650 [2024-12-09 05:37:50.489536] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:47:03.650 [2024-12-09 05:37:50.489561] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:47:03.650 [2024-12-09 05:37:50.489578] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:47:03.650 [2024-12-09 05:37:50.493441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:47:03.650 [2024-12-09 05:37:50.493487] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:47:03.650 [2024-12-09 05:37:50.493604] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:47:03.650 [2024-12-09 05:37:50.493669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:47:03.650 [2024-12-09 05:37:50.493836] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:47:03.650 [2024-12-09 05:37:50.493859] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:47:03.650 pt2 00:47:03.650 [2024-12-09 05:37:50.494023] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:47:03.650 [2024-12-09 05:37:50.494129] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:47:03.651 [2024-12-09 05:37:50.494143] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:47:03.651 [2024-12-09 05:37:50.494269] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:47:03.651 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:03.651 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:47:03.651 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:47:03.651 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:47:03.651 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:47:03.651 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:47:03.651 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:47:03.651 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:47:03.651 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:47:03.651 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:47:03.651 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:47:03.651 05:37:50 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:03.651 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:47:03.651 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:03.651 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:03.651 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:03.651 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:47:03.651 "name": "raid_bdev1", 00:47:03.651 "uuid": "0d66256c-bc08-4041-9297-04ad95f76408", 00:47:03.651 "strip_size_kb": 0, 00:47:03.651 "state": "online", 00:47:03.651 "raid_level": "raid1", 00:47:03.651 "superblock": true, 00:47:03.651 "num_base_bdevs": 2, 00:47:03.651 "num_base_bdevs_discovered": 1, 00:47:03.651 "num_base_bdevs_operational": 1, 00:47:03.651 "base_bdevs_list": [ 00:47:03.651 { 00:47:03.651 "name": null, 00:47:03.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:47:03.651 "is_configured": false, 00:47:03.651 "data_offset": 256, 00:47:03.651 "data_size": 7936 00:47:03.651 }, 00:47:03.651 { 00:47:03.651 "name": "pt2", 00:47:03.651 "uuid": "00000000-0000-0000-0000-000000000002", 00:47:03.651 "is_configured": true, 00:47:03.651 "data_offset": 256, 00:47:03.651 "data_size": 7936 00:47:03.651 } 00:47:03.651 ] 00:47:03.651 }' 00:47:03.651 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:47:03.651 05:37:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:04.217 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:47:04.217 05:37:51 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:04.217 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:04.217 [2024-12-09 05:37:51.021995] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:47:04.217 [2024-12-09 05:37:51.022067] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:47:04.217 [2024-12-09 05:37:51.022230] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:47:04.217 [2024-12-09 05:37:51.022330] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:47:04.217 [2024-12-09 05:37:51.022345] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:47:04.217 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:04.217 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:04.218 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:04.218 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:04.218 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:47:04.218 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:04.218 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:47:04.218 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:47:04.218 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:47:04.218 05:37:51 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:47:04.218 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:04.218 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:04.218 [2024-12-09 05:37:51.085989] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:47:04.218 [2024-12-09 05:37:51.086072] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:47:04.218 [2024-12-09 05:37:51.086116] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:47:04.218 [2024-12-09 05:37:51.086146] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:47:04.218 [2024-12-09 05:37:51.089396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:47:04.218 [2024-12-09 05:37:51.089449] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:47:04.218 [2024-12-09 05:37:51.089565] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:47:04.218 [2024-12-09 05:37:51.089634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:47:04.218 [2024-12-09 05:37:51.089752] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:47:04.218 [2024-12-09 05:37:51.089768] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:47:04.218 [2024-12-09 05:37:51.089820] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:47:04.218 [2024-12-09 05:37:51.089927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:47:04.218 [2024-12-09 05:37:51.090048] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:47:04.218 [2024-12-09 05:37:51.090063] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:47:04.218 [2024-12-09 05:37:51.090171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:47:04.218 [2024-12-09 05:37:51.090293] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:47:04.218 [2024-12-09 05:37:51.090331] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:47:04.218 [2024-12-09 05:37:51.090493] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:47:04.218 pt1 00:47:04.218 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:04.218 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:47:04.218 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:47:04.218 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:47:04.218 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:47:04.218 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:47:04.218 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:47:04.218 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:47:04.218 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:47:04.218 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:47:04.218 05:37:51 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:47:04.218 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:47:04.218 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:04.218 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:47:04.218 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:04.218 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:04.218 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:04.218 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:47:04.218 "name": "raid_bdev1", 00:47:04.218 "uuid": "0d66256c-bc08-4041-9297-04ad95f76408", 00:47:04.218 "strip_size_kb": 0, 00:47:04.218 "state": "online", 00:47:04.218 "raid_level": "raid1", 00:47:04.218 "superblock": true, 00:47:04.218 "num_base_bdevs": 2, 00:47:04.218 "num_base_bdevs_discovered": 1, 00:47:04.218 "num_base_bdevs_operational": 1, 00:47:04.218 "base_bdevs_list": [ 00:47:04.218 { 00:47:04.218 "name": null, 00:47:04.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:47:04.218 "is_configured": false, 00:47:04.218 "data_offset": 256, 00:47:04.218 "data_size": 7936 00:47:04.218 }, 00:47:04.218 { 00:47:04.218 "name": "pt2", 00:47:04.218 "uuid": "00000000-0000-0000-0000-000000000002", 00:47:04.218 "is_configured": true, 00:47:04.218 "data_offset": 256, 00:47:04.218 "data_size": 7936 00:47:04.218 } 00:47:04.218 ] 00:47:04.218 }' 00:47:04.218 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:47:04.218 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:47:04.783 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:47:04.783 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:47:04.783 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:04.783 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:04.783 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:04.783 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:47:04.783 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:47:04.783 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:04.783 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:04.783 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:47:04.783 [2024-12-09 05:37:51.690860] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:47:04.783 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:04.783 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 0d66256c-bc08-4041-9297-04ad95f76408 '!=' 0d66256c-bc08-4041-9297-04ad95f76408 ']' 00:47:04.783 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89312 00:47:04.783 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89312 ']' 00:47:04.783 05:37:51 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89312 00:47:04.783 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:47:04.783 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:04.783 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89312 00:47:05.041 killing process with pid 89312 00:47:05.041 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:05.041 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:05.041 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89312' 00:47:05.041 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89312 00:47:05.041 [2024-12-09 05:37:51.779421] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:47:05.041 [2024-12-09 05:37:51.779516] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:47:05.041 05:37:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89312 00:47:05.041 [2024-12-09 05:37:51.779605] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:47:05.041 [2024-12-09 05:37:51.779657] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:47:05.041 [2024-12-09 05:37:51.982712] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:47:06.411 05:37:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:47:06.411 00:47:06.411 real 0m7.121s 00:47:06.411 user 0m11.079s 00:47:06.411 sys 0m1.133s 
00:47:06.411 05:37:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:06.411 05:37:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:06.411 ************************************ 00:47:06.411 END TEST raid_superblock_test_md_interleaved 00:47:06.411 ************************************ 00:47:06.411 05:37:53 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:47:06.411 05:37:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:47:06.411 05:37:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:06.411 05:37:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:47:06.411 ************************************ 00:47:06.411 START TEST raid_rebuild_test_sb_md_interleaved 00:47:06.411 ************************************ 00:47:06.411 05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:47:06.411 05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:47:06.411 05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:47:06.411 05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:47:06.411 05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:47:06.411 05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:47:06.411 05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:47:06.411 05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:47:06.411 05:37:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:47:06.411 05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:47:06.411 05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:47:06.411 05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:47:06.411 05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:47:06.412 05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:47:06.412 05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:47:06.412 05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:47:06.412 05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:47:06.412 05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:47:06.412 05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:47:06.412 05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:47:06.412 05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:47:06.412 05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:47:06.412 05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:47:06.412 05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:47:06.412 05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:47:06.412 
05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89646 00:47:06.412 05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89646 00:47:06.412 05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:47:06.412 05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89646 ']' 00:47:06.412 05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:06.412 05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:06.412 05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:06.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:06.412 05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:06.412 05:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:06.669 [2024-12-09 05:37:53.443413] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:47:06.669 [2024-12-09 05:37:53.443845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89646 ] 00:47:06.669 I/O size of 3145728 is greater than zero copy threshold (65536). 00:47:06.669 Zero copy mechanism will not be used. 
00:47:06.669 [2024-12-09 05:37:53.639097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:06.926 [2024-12-09 05:37:53.789179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:07.184 [2024-12-09 05:37:54.016797] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:47:07.184 [2024-12-09 05:37:54.017093] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:47:07.441 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:07.441 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:47:07.441 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:47:07.441 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:47:07.441 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:07.441 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:07.700 BaseBdev1_malloc 00:47:07.700 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:07.700 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:47:07.700 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:07.700 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:07.700 [2024-12-09 05:37:54.449307] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:47:07.700 [2024-12-09 05:37:54.449440] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:47:07.700 
[2024-12-09 05:37:54.449472] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:47:07.700 [2024-12-09 05:37:54.449491] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:47:07.700 [2024-12-09 05:37:54.452474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:47:07.700 [2024-12-09 05:37:54.452538] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:47:07.700 BaseBdev1 00:47:07.700 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:07.700 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:47:07.700 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:47:07.700 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:07.700 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:07.700 BaseBdev2_malloc 00:47:07.700 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:07.700 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:47:07.700 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:07.700 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:07.700 [2024-12-09 05:37:54.505616] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:47:07.700 [2024-12-09 05:37:54.505721] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:47:07.700 [2024-12-09 05:37:54.505766] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:47:07.700 [2024-12-09 05:37:54.505799] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:47:07.700 [2024-12-09 05:37:54.508531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:47:07.700 [2024-12-09 05:37:54.508593] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:47:07.700 BaseBdev2 00:47:07.700 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:07.700 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:47:07.700 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:07.700 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:07.700 spare_malloc 00:47:07.700 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:07.700 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:47:07.700 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:07.700 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:07.700 spare_delay 00:47:07.700 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:07.700 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:47:07.700 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:07.700 05:37:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:07.700 [2024-12-09 05:37:54.590580] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:47:07.700 [2024-12-09 05:37:54.590705] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:47:07.700 [2024-12-09 05:37:54.590737] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:47:07.700 [2024-12-09 05:37:54.590755] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:47:07.700 [2024-12-09 05:37:54.593622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:47:07.700 [2024-12-09 05:37:54.593672] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:47:07.700 spare 00:47:07.700 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:07.700 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:47:07.700 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:07.700 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:07.700 [2024-12-09 05:37:54.598626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:47:07.700 [2024-12-09 05:37:54.601600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:47:07.700 [2024-12-09 05:37:54.602052] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:47:07.700 [2024-12-09 05:37:54.602192] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:47:07.700 [2024-12-09 05:37:54.602333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:47:07.700 [2024-12-09 05:37:54.602586] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:47:07.700 [2024-12-09 05:37:54.602698] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:47:07.701 [2024-12-09 05:37:54.603021] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:47:07.701 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:07.701 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:47:07.701 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:47:07.701 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:47:07.701 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:47:07.701 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:47:07.701 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:47:07.701 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:47:07.701 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:47:07.701 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:47:07.701 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:47:07.701 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:07.701 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:47:07.701 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:07.701 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:07.701 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:07.701 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:47:07.701 "name": "raid_bdev1", 00:47:07.701 "uuid": "4e9f6768-1234-4e93-a247-1cbf71e2c348", 00:47:07.701 "strip_size_kb": 0, 00:47:07.701 "state": "online", 00:47:07.701 "raid_level": "raid1", 00:47:07.701 "superblock": true, 00:47:07.701 "num_base_bdevs": 2, 00:47:07.701 "num_base_bdevs_discovered": 2, 00:47:07.701 "num_base_bdevs_operational": 2, 00:47:07.701 "base_bdevs_list": [ 00:47:07.701 { 00:47:07.701 "name": "BaseBdev1", 00:47:07.701 "uuid": "bb61d9fb-f761-5f6b-9a79-e8095981571c", 00:47:07.701 "is_configured": true, 00:47:07.701 "data_offset": 256, 00:47:07.701 "data_size": 7936 00:47:07.701 }, 00:47:07.701 { 00:47:07.701 "name": "BaseBdev2", 00:47:07.701 "uuid": "8b04e00a-2fab-5478-908e-eaa56bb08a2d", 00:47:07.701 "is_configured": true, 00:47:07.701 "data_offset": 256, 00:47:07.701 "data_size": 7936 00:47:07.701 } 00:47:07.701 ] 00:47:07.701 }' 00:47:07.701 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:47:07.701 05:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:08.266 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:47:08.266 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:47:08.266 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:08.266 
05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:08.267 [2024-12-09 05:37:55.123635] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:47:08.267 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:08.267 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:47:08.267 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:08.267 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:08.267 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:08.267 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:47:08.267 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:08.267 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:47:08.267 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:47:08.267 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:47:08.267 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:47:08.267 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:08.267 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:08.267 [2024-12-09 05:37:55.223293] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:47:08.267 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:08.267 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:47:08.267 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:47:08.267 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:47:08.267 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:47:08.267 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:47:08.267 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:47:08.267 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:47:08.267 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:47:08.267 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:47:08.267 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:47:08.267 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:08.267 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:08.267 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:08.267 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:47:08.527 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:08.527 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:47:08.527 "name": "raid_bdev1", 00:47:08.527 "uuid": "4e9f6768-1234-4e93-a247-1cbf71e2c348", 00:47:08.527 "strip_size_kb": 0, 00:47:08.527 "state": "online", 00:47:08.527 "raid_level": "raid1", 00:47:08.527 "superblock": true, 00:47:08.527 "num_base_bdevs": 2, 00:47:08.527 "num_base_bdevs_discovered": 1, 00:47:08.527 "num_base_bdevs_operational": 1, 00:47:08.527 "base_bdevs_list": [ 00:47:08.527 { 00:47:08.527 "name": null, 00:47:08.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:47:08.527 "is_configured": false, 00:47:08.527 "data_offset": 0, 00:47:08.527 "data_size": 7936 00:47:08.527 }, 00:47:08.527 { 00:47:08.527 "name": "BaseBdev2", 00:47:08.527 "uuid": "8b04e00a-2fab-5478-908e-eaa56bb08a2d", 00:47:08.527 "is_configured": true, 00:47:08.527 "data_offset": 256, 00:47:08.527 "data_size": 7936 00:47:08.527 } 00:47:08.527 ] 00:47:08.527 }' 00:47:08.527 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:47:08.527 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:08.784 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:47:08.784 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:08.784 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:08.784 [2024-12-09 05:37:55.719491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:47:08.784 [2024-12-09 05:37:55.736109] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:47:08.784 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:08.785 05:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:47:08.785 
[2024-12-09 05:37:55.738959] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:47:10.158 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:47:10.158 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:47:10.158 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:47:10.158 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:47:10.158 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:47:10.158 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:10.158 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:10.158 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:47:10.158 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:10.158 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:10.158 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:47:10.158 "name": "raid_bdev1", 00:47:10.158 "uuid": "4e9f6768-1234-4e93-a247-1cbf71e2c348", 00:47:10.158 "strip_size_kb": 0, 00:47:10.158 "state": "online", 00:47:10.158 "raid_level": "raid1", 00:47:10.158 "superblock": true, 00:47:10.158 "num_base_bdevs": 2, 00:47:10.158 "num_base_bdevs_discovered": 2, 00:47:10.158 "num_base_bdevs_operational": 2, 00:47:10.158 "process": { 00:47:10.158 "type": "rebuild", 00:47:10.158 "target": "spare", 00:47:10.158 "progress": { 00:47:10.158 
"blocks": 2560, 00:47:10.158 "percent": 32 00:47:10.158 } 00:47:10.158 }, 00:47:10.158 "base_bdevs_list": [ 00:47:10.158 { 00:47:10.158 "name": "spare", 00:47:10.158 "uuid": "25cecbb4-ae95-5152-97c0-dc9de21bbc38", 00:47:10.158 "is_configured": true, 00:47:10.158 "data_offset": 256, 00:47:10.158 "data_size": 7936 00:47:10.158 }, 00:47:10.158 { 00:47:10.158 "name": "BaseBdev2", 00:47:10.158 "uuid": "8b04e00a-2fab-5478-908e-eaa56bb08a2d", 00:47:10.158 "is_configured": true, 00:47:10.158 "data_offset": 256, 00:47:10.158 "data_size": 7936 00:47:10.158 } 00:47:10.158 ] 00:47:10.158 }' 00:47:10.158 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:47:10.158 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:47:10.158 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:47:10.158 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:47:10.158 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:47:10.158 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:10.158 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:10.158 [2024-12-09 05:37:56.904637] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:47:10.158 [2024-12-09 05:37:56.948664] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:47:10.158 [2024-12-09 05:37:56.948854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:47:10.158 [2024-12-09 05:37:56.948896] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:47:10.158 [2024-12-09 05:37:56.948912] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:47:10.158 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:10.158 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:47:10.158 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:47:10.158 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:47:10.158 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:47:10.159 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:47:10.159 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:47:10.159 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:47:10.159 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:47:10.159 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:47:10.159 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:47:10.159 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:10.159 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:10.159 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:10.159 05:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:47:10.159 05:37:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:10.159 05:37:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:47:10.159 "name": "raid_bdev1", 00:47:10.159 "uuid": "4e9f6768-1234-4e93-a247-1cbf71e2c348", 00:47:10.159 "strip_size_kb": 0, 00:47:10.159 "state": "online", 00:47:10.159 "raid_level": "raid1", 00:47:10.159 "superblock": true, 00:47:10.159 "num_base_bdevs": 2, 00:47:10.159 "num_base_bdevs_discovered": 1, 00:47:10.159 "num_base_bdevs_operational": 1, 00:47:10.159 "base_bdevs_list": [ 00:47:10.159 { 00:47:10.159 "name": null, 00:47:10.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:47:10.159 "is_configured": false, 00:47:10.159 "data_offset": 0, 00:47:10.159 "data_size": 7936 00:47:10.159 }, 00:47:10.159 { 00:47:10.159 "name": "BaseBdev2", 00:47:10.159 "uuid": "8b04e00a-2fab-5478-908e-eaa56bb08a2d", 00:47:10.159 "is_configured": true, 00:47:10.159 "data_offset": 256, 00:47:10.159 "data_size": 7936 00:47:10.159 } 00:47:10.159 ] 00:47:10.159 }' 00:47:10.159 05:37:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:47:10.159 05:37:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:10.725 05:37:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:47:10.725 05:37:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:47:10.725 05:37:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:47:10.725 05:37:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:47:10.725 05:37:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:47:10.725 05:37:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:10.725 05:37:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:10.725 05:37:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:10.725 05:37:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:47:10.725 05:37:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:10.725 05:37:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:47:10.725 "name": "raid_bdev1", 00:47:10.725 "uuid": "4e9f6768-1234-4e93-a247-1cbf71e2c348", 00:47:10.725 "strip_size_kb": 0, 00:47:10.725 "state": "online", 00:47:10.725 "raid_level": "raid1", 00:47:10.725 "superblock": true, 00:47:10.725 "num_base_bdevs": 2, 00:47:10.726 "num_base_bdevs_discovered": 1, 00:47:10.726 "num_base_bdevs_operational": 1, 00:47:10.726 "base_bdevs_list": [ 00:47:10.726 { 00:47:10.726 "name": null, 00:47:10.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:47:10.726 "is_configured": false, 00:47:10.726 "data_offset": 0, 00:47:10.726 "data_size": 7936 00:47:10.726 }, 00:47:10.726 { 00:47:10.726 "name": "BaseBdev2", 00:47:10.726 "uuid": "8b04e00a-2fab-5478-908e-eaa56bb08a2d", 00:47:10.726 "is_configured": true, 00:47:10.726 "data_offset": 256, 00:47:10.726 "data_size": 7936 00:47:10.726 } 00:47:10.726 ] 00:47:10.726 }' 00:47:10.726 05:37:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:47:10.726 05:37:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:47:10.726 05:37:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:47:10.726 05:37:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:47:10.726 05:37:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:47:10.726 05:37:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:10.726 05:37:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:10.726 [2024-12-09 05:37:57.671487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:47:10.726 [2024-12-09 05:37:57.688142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:47:10.726 05:37:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:10.726 05:37:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:47:10.726 [2024-12-09 05:37:57.690845] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:47:12.101 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:47:12.101 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:47:12.101 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:47:12.101 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:47:12.101 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:47:12.101 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:47:12.101 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:47:12.101 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:12.101 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:12.101 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:12.101 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:47:12.101 "name": "raid_bdev1", 00:47:12.101 "uuid": "4e9f6768-1234-4e93-a247-1cbf71e2c348", 00:47:12.101 "strip_size_kb": 0, 00:47:12.101 "state": "online", 00:47:12.101 "raid_level": "raid1", 00:47:12.101 "superblock": true, 00:47:12.101 "num_base_bdevs": 2, 00:47:12.101 "num_base_bdevs_discovered": 2, 00:47:12.101 "num_base_bdevs_operational": 2, 00:47:12.101 "process": { 00:47:12.101 "type": "rebuild", 00:47:12.101 "target": "spare", 00:47:12.101 "progress": { 00:47:12.101 "blocks": 2560, 00:47:12.101 "percent": 32 00:47:12.101 } 00:47:12.101 }, 00:47:12.101 "base_bdevs_list": [ 00:47:12.101 { 00:47:12.101 "name": "spare", 00:47:12.101 "uuid": "25cecbb4-ae95-5152-97c0-dc9de21bbc38", 00:47:12.101 "is_configured": true, 00:47:12.101 "data_offset": 256, 00:47:12.101 "data_size": 7936 00:47:12.101 }, 00:47:12.101 { 00:47:12.101 "name": "BaseBdev2", 00:47:12.101 "uuid": "8b04e00a-2fab-5478-908e-eaa56bb08a2d", 00:47:12.101 "is_configured": true, 00:47:12.101 "data_offset": 256, 00:47:12.101 "data_size": 7936 00:47:12.101 } 00:47:12.101 ] 00:47:12.101 }' 00:47:12.101 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:47:12.101 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:47:12.101 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:47:12.101 05:37:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:47:12.101 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:47:12.101 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:47:12.101 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:47:12.101 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:47:12.101 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:47:12.101 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:47:12.101 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=814 00:47:12.101 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:47:12.101 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:47:12.101 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:47:12.101 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:47:12.101 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:47:12.101 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:47:12.101 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:12.101 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:47:12.101 05:37:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:12.101 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:12.101 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:12.101 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:47:12.101 "name": "raid_bdev1", 00:47:12.101 "uuid": "4e9f6768-1234-4e93-a247-1cbf71e2c348", 00:47:12.101 "strip_size_kb": 0, 00:47:12.101 "state": "online", 00:47:12.101 "raid_level": "raid1", 00:47:12.102 "superblock": true, 00:47:12.102 "num_base_bdevs": 2, 00:47:12.102 "num_base_bdevs_discovered": 2, 00:47:12.102 "num_base_bdevs_operational": 2, 00:47:12.102 "process": { 00:47:12.102 "type": "rebuild", 00:47:12.102 "target": "spare", 00:47:12.102 "progress": { 00:47:12.102 "blocks": 2816, 00:47:12.102 "percent": 35 00:47:12.102 } 00:47:12.102 }, 00:47:12.102 "base_bdevs_list": [ 00:47:12.102 { 00:47:12.102 "name": "spare", 00:47:12.102 "uuid": "25cecbb4-ae95-5152-97c0-dc9de21bbc38", 00:47:12.102 "is_configured": true, 00:47:12.102 "data_offset": 256, 00:47:12.102 "data_size": 7936 00:47:12.102 }, 00:47:12.102 { 00:47:12.102 "name": "BaseBdev2", 00:47:12.102 "uuid": "8b04e00a-2fab-5478-908e-eaa56bb08a2d", 00:47:12.102 "is_configured": true, 00:47:12.102 "data_offset": 256, 00:47:12.102 "data_size": 7936 00:47:12.102 } 00:47:12.102 ] 00:47:12.102 }' 00:47:12.102 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:47:12.102 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:47:12.102 05:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:47:12.102 05:37:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:47:12.102 05:37:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:47:13.476 05:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:47:13.476 05:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:47:13.476 05:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:47:13.476 05:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:47:13.476 05:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:47:13.476 05:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:47:13.476 05:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:13.476 05:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:47:13.476 05:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:13.476 05:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:13.476 05:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:13.476 05:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:47:13.476 "name": "raid_bdev1", 00:47:13.476 "uuid": "4e9f6768-1234-4e93-a247-1cbf71e2c348", 00:47:13.476 "strip_size_kb": 0, 00:47:13.476 "state": "online", 00:47:13.476 "raid_level": "raid1", 00:47:13.476 "superblock": true, 00:47:13.476 "num_base_bdevs": 2, 00:47:13.476 "num_base_bdevs_discovered": 2, 00:47:13.476 
"num_base_bdevs_operational": 2, 00:47:13.476 "process": { 00:47:13.476 "type": "rebuild", 00:47:13.476 "target": "spare", 00:47:13.476 "progress": { 00:47:13.476 "blocks": 5888, 00:47:13.476 "percent": 74 00:47:13.476 } 00:47:13.476 }, 00:47:13.476 "base_bdevs_list": [ 00:47:13.476 { 00:47:13.476 "name": "spare", 00:47:13.476 "uuid": "25cecbb4-ae95-5152-97c0-dc9de21bbc38", 00:47:13.476 "is_configured": true, 00:47:13.476 "data_offset": 256, 00:47:13.476 "data_size": 7936 00:47:13.476 }, 00:47:13.476 { 00:47:13.476 "name": "BaseBdev2", 00:47:13.476 "uuid": "8b04e00a-2fab-5478-908e-eaa56bb08a2d", 00:47:13.476 "is_configured": true, 00:47:13.476 "data_offset": 256, 00:47:13.476 "data_size": 7936 00:47:13.476 } 00:47:13.476 ] 00:47:13.476 }' 00:47:13.476 05:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:47:13.476 05:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:47:13.476 05:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:47:13.476 05:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:47:13.476 05:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:47:14.041 [2024-12-09 05:38:00.815220] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:47:14.041 [2024-12-09 05:38:00.815652] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:47:14.041 [2024-12-09 05:38:00.815880] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:47:14.299 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:47:14.299 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:47:14.299 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:47:14.299 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:47:14.299 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:47:14.299 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:47:14.299 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:14.299 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:47:14.299 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:14.299 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:14.299 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:14.299 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:47:14.299 "name": "raid_bdev1", 00:47:14.299 "uuid": "4e9f6768-1234-4e93-a247-1cbf71e2c348", 00:47:14.299 "strip_size_kb": 0, 00:47:14.299 "state": "online", 00:47:14.299 "raid_level": "raid1", 00:47:14.299 "superblock": true, 00:47:14.299 "num_base_bdevs": 2, 00:47:14.299 "num_base_bdevs_discovered": 2, 00:47:14.299 "num_base_bdevs_operational": 2, 00:47:14.299 "base_bdevs_list": [ 00:47:14.299 { 00:47:14.299 "name": "spare", 00:47:14.299 "uuid": "25cecbb4-ae95-5152-97c0-dc9de21bbc38", 00:47:14.299 "is_configured": true, 00:47:14.299 "data_offset": 256, 00:47:14.299 "data_size": 7936 00:47:14.299 }, 00:47:14.299 { 00:47:14.299 "name": "BaseBdev2", 00:47:14.299 "uuid": "8b04e00a-2fab-5478-908e-eaa56bb08a2d", 00:47:14.299 
"is_configured": true, 00:47:14.299 "data_offset": 256, 00:47:14.299 "data_size": 7936 00:47:14.299 } 00:47:14.299 ] 00:47:14.299 }' 00:47:14.299 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:47:14.557 "name": "raid_bdev1", 00:47:14.557 "uuid": "4e9f6768-1234-4e93-a247-1cbf71e2c348", 00:47:14.557 "strip_size_kb": 0, 00:47:14.557 "state": "online", 00:47:14.557 "raid_level": "raid1", 00:47:14.557 "superblock": true, 00:47:14.557 "num_base_bdevs": 2, 00:47:14.557 "num_base_bdevs_discovered": 2, 00:47:14.557 "num_base_bdevs_operational": 2, 00:47:14.557 "base_bdevs_list": [ 00:47:14.557 { 00:47:14.557 "name": "spare", 00:47:14.557 "uuid": "25cecbb4-ae95-5152-97c0-dc9de21bbc38", 00:47:14.557 "is_configured": true, 00:47:14.557 "data_offset": 256, 00:47:14.557 "data_size": 7936 00:47:14.557 }, 00:47:14.557 { 00:47:14.557 "name": "BaseBdev2", 00:47:14.557 "uuid": "8b04e00a-2fab-5478-908e-eaa56bb08a2d", 00:47:14.557 "is_configured": true, 00:47:14.557 "data_offset": 256, 00:47:14.557 "data_size": 7936 00:47:14.557 } 00:47:14.557 ] 00:47:14.557 }' 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:47:14.557 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:14.814 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:14.814 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:47:14.814 "name": "raid_bdev1", 00:47:14.814 "uuid": "4e9f6768-1234-4e93-a247-1cbf71e2c348", 00:47:14.814 "strip_size_kb": 0, 00:47:14.814 "state": "online", 00:47:14.814 "raid_level": "raid1", 00:47:14.814 "superblock": true, 00:47:14.814 "num_base_bdevs": 2, 00:47:14.814 "num_base_bdevs_discovered": 2, 00:47:14.814 "num_base_bdevs_operational": 2, 00:47:14.814 "base_bdevs_list": [ 00:47:14.814 { 00:47:14.814 "name": "spare", 00:47:14.814 "uuid": "25cecbb4-ae95-5152-97c0-dc9de21bbc38", 00:47:14.814 
"is_configured": true, 00:47:14.814 "data_offset": 256, 00:47:14.814 "data_size": 7936 00:47:14.814 }, 00:47:14.814 { 00:47:14.814 "name": "BaseBdev2", 00:47:14.814 "uuid": "8b04e00a-2fab-5478-908e-eaa56bb08a2d", 00:47:14.814 "is_configured": true, 00:47:14.814 "data_offset": 256, 00:47:14.814 "data_size": 7936 00:47:14.814 } 00:47:14.814 ] 00:47:14.814 }' 00:47:14.814 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:47:14.814 05:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:15.072 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:47:15.072 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:15.072 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:15.329 [2024-12-09 05:38:02.046097] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:47:15.329 [2024-12-09 05:38:02.046174] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:47:15.329 [2024-12-09 05:38:02.046300] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:47:15.329 [2024-12-09 05:38:02.046433] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:47:15.329 [2024-12-09 05:38:02.046454] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:47:15.329 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:15.329 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:15.329 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:47:15.329 
05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:15.329 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:15.329 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:15.329 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:47:15.329 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:47:15.329 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:47:15.329 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:47:15.329 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:15.329 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:15.329 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:15.329 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:47:15.329 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:15.329 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:15.329 [2024-12-09 05:38:02.114067] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:47:15.329 [2024-12-09 05:38:02.114147] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:47:15.329 [2024-12-09 05:38:02.114209] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:47:15.329 [2024-12-09 05:38:02.114223] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:47:15.329 [2024-12-09 05:38:02.117207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:47:15.329 [2024-12-09 05:38:02.117258] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:47:15.329 [2024-12-09 05:38:02.117332] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:47:15.329 [2024-12-09 05:38:02.117391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:47:15.329 [2024-12-09 05:38:02.117532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:47:15.329 spare 00:47:15.329 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:15.329 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:47:15.330 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:15.330 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:15.330 [2024-12-09 05:38:02.217656] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:47:15.330 [2024-12-09 05:38:02.217728] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:47:15.330 [2024-12-09 05:38:02.217909] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:47:15.330 [2024-12-09 05:38:02.218103] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:47:15.330 [2024-12-09 05:38:02.218130] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:47:15.330 [2024-12-09 05:38:02.218286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:47:15.330 05:38:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:15.330 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:47:15.330 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:47:15.330 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:47:15.330 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:47:15.330 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:47:15.330 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:47:15.330 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:47:15.330 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:47:15.330 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:47:15.330 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:47:15.330 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:15.330 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:47:15.330 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:15.330 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:15.330 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:15.330 05:38:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:47:15.330 "name": "raid_bdev1", 00:47:15.330 "uuid": "4e9f6768-1234-4e93-a247-1cbf71e2c348", 00:47:15.330 "strip_size_kb": 0, 00:47:15.330 "state": "online", 00:47:15.330 "raid_level": "raid1", 00:47:15.330 "superblock": true, 00:47:15.330 "num_base_bdevs": 2, 00:47:15.330 "num_base_bdevs_discovered": 2, 00:47:15.330 "num_base_bdevs_operational": 2, 00:47:15.330 "base_bdevs_list": [ 00:47:15.330 { 00:47:15.330 "name": "spare", 00:47:15.330 "uuid": "25cecbb4-ae95-5152-97c0-dc9de21bbc38", 00:47:15.330 "is_configured": true, 00:47:15.330 "data_offset": 256, 00:47:15.330 "data_size": 7936 00:47:15.330 }, 00:47:15.330 { 00:47:15.330 "name": "BaseBdev2", 00:47:15.330 "uuid": "8b04e00a-2fab-5478-908e-eaa56bb08a2d", 00:47:15.330 "is_configured": true, 00:47:15.330 "data_offset": 256, 00:47:15.330 "data_size": 7936 00:47:15.330 } 00:47:15.330 ] 00:47:15.330 }' 00:47:15.330 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:47:15.330 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:15.895 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:47:15.895 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:47:15.895 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:47:15.895 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:47:15.895 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:47:15.895 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:15.895 05:38:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:15.895 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:15.895 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:47:15.895 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:15.895 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:47:15.895 "name": "raid_bdev1", 00:47:15.895 "uuid": "4e9f6768-1234-4e93-a247-1cbf71e2c348", 00:47:15.895 "strip_size_kb": 0, 00:47:15.895 "state": "online", 00:47:15.895 "raid_level": "raid1", 00:47:15.895 "superblock": true, 00:47:15.895 "num_base_bdevs": 2, 00:47:15.895 "num_base_bdevs_discovered": 2, 00:47:15.895 "num_base_bdevs_operational": 2, 00:47:15.895 "base_bdevs_list": [ 00:47:15.895 { 00:47:15.895 "name": "spare", 00:47:15.895 "uuid": "25cecbb4-ae95-5152-97c0-dc9de21bbc38", 00:47:15.895 "is_configured": true, 00:47:15.895 "data_offset": 256, 00:47:15.895 "data_size": 7936 00:47:15.895 }, 00:47:15.895 { 00:47:15.895 "name": "BaseBdev2", 00:47:15.895 "uuid": "8b04e00a-2fab-5478-908e-eaa56bb08a2d", 00:47:15.895 "is_configured": true, 00:47:15.895 "data_offset": 256, 00:47:15.895 "data_size": 7936 00:47:15.895 } 00:47:15.895 ] 00:47:15.895 }' 00:47:15.895 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:47:15.895 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:47:15.895 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:47:16.154 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:47:16.154 05:38:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:47:16.154 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:16.154 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:16.154 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:16.154 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:16.154 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:47:16.154 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:47:16.154 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:16.154 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:16.154 [2024-12-09 05:38:02.938606] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:47:16.154 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:16.154 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:47:16.154 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:47:16.154 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:47:16.154 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:47:16.154 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:47:16.154 05:38:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:47:16.154 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:47:16.154 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:47:16.154 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:47:16.154 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:47:16.154 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:16.154 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:16.154 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:47:16.154 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:16.154 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:16.154 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:47:16.154 "name": "raid_bdev1", 00:47:16.154 "uuid": "4e9f6768-1234-4e93-a247-1cbf71e2c348", 00:47:16.154 "strip_size_kb": 0, 00:47:16.154 "state": "online", 00:47:16.154 "raid_level": "raid1", 00:47:16.154 "superblock": true, 00:47:16.154 "num_base_bdevs": 2, 00:47:16.154 "num_base_bdevs_discovered": 1, 00:47:16.154 "num_base_bdevs_operational": 1, 00:47:16.154 "base_bdevs_list": [ 00:47:16.154 { 00:47:16.154 "name": null, 00:47:16.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:47:16.154 "is_configured": false, 00:47:16.154 "data_offset": 0, 00:47:16.154 "data_size": 7936 00:47:16.154 }, 00:47:16.154 { 00:47:16.154 "name": "BaseBdev2", 00:47:16.154 
"uuid": "8b04e00a-2fab-5478-908e-eaa56bb08a2d", 00:47:16.154 "is_configured": true, 00:47:16.154 "data_offset": 256, 00:47:16.154 "data_size": 7936 00:47:16.154 } 00:47:16.154 ] 00:47:16.154 }' 00:47:16.154 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:47:16.154 05:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:16.719 05:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:47:16.719 05:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:16.719 05:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:16.719 [2024-12-09 05:38:03.470768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:47:16.719 [2024-12-09 05:38:03.471219] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:47:16.719 [2024-12-09 05:38:03.471260] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:47:16.719 [2024-12-09 05:38:03.471305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:47:16.719 [2024-12-09 05:38:03.487636] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:47:16.719 05:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:16.719 05:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:47:16.719 [2024-12-09 05:38:03.490411] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:47:17.654 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:47:17.654 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:47:17.654 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:47:17.654 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:47:17.654 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:47:17.654 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:17.654 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:47:17.654 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:17.654 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:17.654 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:17.654 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:47:17.654 "name": "raid_bdev1", 00:47:17.654 "uuid": "4e9f6768-1234-4e93-a247-1cbf71e2c348", 00:47:17.654 "strip_size_kb": 0, 00:47:17.654 "state": "online", 00:47:17.654 "raid_level": "raid1", 00:47:17.654 "superblock": true, 00:47:17.654 "num_base_bdevs": 2, 00:47:17.654 "num_base_bdevs_discovered": 2, 00:47:17.654 "num_base_bdevs_operational": 2, 00:47:17.654 "process": { 00:47:17.654 "type": "rebuild", 00:47:17.654 "target": "spare", 00:47:17.654 "progress": { 00:47:17.654 "blocks": 2560, 00:47:17.654 "percent": 32 00:47:17.654 } 00:47:17.654 }, 00:47:17.654 "base_bdevs_list": [ 00:47:17.655 { 00:47:17.655 "name": "spare", 00:47:17.655 "uuid": "25cecbb4-ae95-5152-97c0-dc9de21bbc38", 00:47:17.655 "is_configured": true, 00:47:17.655 "data_offset": 256, 00:47:17.655 "data_size": 7936 00:47:17.655 }, 00:47:17.655 { 00:47:17.655 "name": "BaseBdev2", 00:47:17.655 "uuid": "8b04e00a-2fab-5478-908e-eaa56bb08a2d", 00:47:17.655 "is_configured": true, 00:47:17.655 "data_offset": 256, 00:47:17.655 "data_size": 7936 00:47:17.655 } 00:47:17.655 ] 00:47:17.655 }' 00:47:17.655 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:47:17.655 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:47:17.655 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:47:17.932 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:47:17.932 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:47:17.932 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:17.932 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:17.932 [2024-12-09 05:38:04.656099] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:47:17.932 [2024-12-09 05:38:04.699998] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:47:17.932 [2024-12-09 05:38:04.700077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:47:17.932 [2024-12-09 05:38:04.700099] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:47:17.932 [2024-12-09 05:38:04.700113] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:47:17.932 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:17.932 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:47:17.932 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:47:17.932 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:47:17.932 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:47:17.932 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:47:17.932 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:47:17.932 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:47:17.932 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:47:17.932 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:47:17.932 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:47:17.932 05:38:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:17.932 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:47:17.932 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:17.932 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:17.932 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:17.932 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:47:17.932 "name": "raid_bdev1", 00:47:17.933 "uuid": "4e9f6768-1234-4e93-a247-1cbf71e2c348", 00:47:17.933 "strip_size_kb": 0, 00:47:17.933 "state": "online", 00:47:17.933 "raid_level": "raid1", 00:47:17.933 "superblock": true, 00:47:17.933 "num_base_bdevs": 2, 00:47:17.933 "num_base_bdevs_discovered": 1, 00:47:17.933 "num_base_bdevs_operational": 1, 00:47:17.933 "base_bdevs_list": [ 00:47:17.933 { 00:47:17.933 "name": null, 00:47:17.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:47:17.933 "is_configured": false, 00:47:17.933 "data_offset": 0, 00:47:17.933 "data_size": 7936 00:47:17.933 }, 00:47:17.933 { 00:47:17.933 "name": "BaseBdev2", 00:47:17.933 "uuid": "8b04e00a-2fab-5478-908e-eaa56bb08a2d", 00:47:17.933 "is_configured": true, 00:47:17.933 "data_offset": 256, 00:47:17.933 "data_size": 7936 00:47:17.933 } 00:47:17.933 ] 00:47:17.933 }' 00:47:17.933 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:47:17.933 05:38:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:18.501 05:38:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:47:18.501 05:38:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:18.501 05:38:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:18.501 [2024-12-09 05:38:05.239689] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:47:18.501 [2024-12-09 05:38:05.239851] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:47:18.501 [2024-12-09 05:38:05.239894] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:47:18.501 [2024-12-09 05:38:05.239914] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:47:18.501 [2024-12-09 05:38:05.240231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:47:18.501 [2024-12-09 05:38:05.240264] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:47:18.501 [2024-12-09 05:38:05.240348] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:47:18.501 [2024-12-09 05:38:05.240385] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:47:18.501 [2024-12-09 05:38:05.240413] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:47:18.501 [2024-12-09 05:38:05.240465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:47:18.501 [2024-12-09 05:38:05.255726] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:47:18.501 spare 00:47:18.501 05:38:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:18.501 05:38:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:47:18.501 [2024-12-09 05:38:05.258346] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:47:19.436 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:47:19.436 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:47:19.436 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:47:19.436 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:47:19.436 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:47:19.436 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:19.436 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:19.436 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:47:19.436 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:19.436 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:19.436 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:47:19.436 "name": "raid_bdev1", 00:47:19.436 "uuid": "4e9f6768-1234-4e93-a247-1cbf71e2c348", 00:47:19.436 "strip_size_kb": 0, 00:47:19.436 "state": "online", 00:47:19.436 "raid_level": "raid1", 00:47:19.436 "superblock": true, 00:47:19.436 "num_base_bdevs": 2, 00:47:19.436 "num_base_bdevs_discovered": 2, 00:47:19.436 "num_base_bdevs_operational": 2, 00:47:19.436 "process": { 00:47:19.436 "type": "rebuild", 00:47:19.436 "target": "spare", 00:47:19.436 "progress": { 00:47:19.436 "blocks": 2560, 00:47:19.436 "percent": 32 00:47:19.436 } 00:47:19.436 }, 00:47:19.436 "base_bdevs_list": [ 00:47:19.436 { 00:47:19.436 "name": "spare", 00:47:19.436 "uuid": "25cecbb4-ae95-5152-97c0-dc9de21bbc38", 00:47:19.436 "is_configured": true, 00:47:19.436 "data_offset": 256, 00:47:19.436 "data_size": 7936 00:47:19.436 }, 00:47:19.436 { 00:47:19.436 "name": "BaseBdev2", 00:47:19.436 "uuid": "8b04e00a-2fab-5478-908e-eaa56bb08a2d", 00:47:19.436 "is_configured": true, 00:47:19.436 "data_offset": 256, 00:47:19.436 "data_size": 7936 00:47:19.436 } 00:47:19.436 ] 00:47:19.436 }' 00:47:19.436 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:47:19.436 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:47:19.436 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:47:19.695 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:47:19.695 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:47:19.695 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:19.695 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:19.695 [2024-12-09 
05:38:06.419959] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:47:19.695 [2024-12-09 05:38:06.468542] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:47:19.695 [2024-12-09 05:38:06.468653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:47:19.695 [2024-12-09 05:38:06.468680] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:47:19.695 [2024-12-09 05:38:06.468691] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:47:19.695 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:19.695 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:47:19.695 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:47:19.695 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:47:19.695 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:47:19.695 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:47:19.695 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:47:19.695 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:47:19.695 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:47:19.695 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:47:19.695 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:47:19.695 05:38:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:19.695 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:47:19.695 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:19.695 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:19.695 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:19.695 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:47:19.695 "name": "raid_bdev1", 00:47:19.695 "uuid": "4e9f6768-1234-4e93-a247-1cbf71e2c348", 00:47:19.695 "strip_size_kb": 0, 00:47:19.695 "state": "online", 00:47:19.695 "raid_level": "raid1", 00:47:19.695 "superblock": true, 00:47:19.695 "num_base_bdevs": 2, 00:47:19.695 "num_base_bdevs_discovered": 1, 00:47:19.695 "num_base_bdevs_operational": 1, 00:47:19.695 "base_bdevs_list": [ 00:47:19.695 { 00:47:19.695 "name": null, 00:47:19.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:47:19.695 "is_configured": false, 00:47:19.695 "data_offset": 0, 00:47:19.696 "data_size": 7936 00:47:19.696 }, 00:47:19.696 { 00:47:19.696 "name": "BaseBdev2", 00:47:19.696 "uuid": "8b04e00a-2fab-5478-908e-eaa56bb08a2d", 00:47:19.696 "is_configured": true, 00:47:19.696 "data_offset": 256, 00:47:19.696 "data_size": 7936 00:47:19.696 } 00:47:19.696 ] 00:47:19.696 }' 00:47:19.696 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:47:19.696 05:38:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:20.263 05:38:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:47:20.263 05:38:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:47:20.263 05:38:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:47:20.263 05:38:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:47:20.263 05:38:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:47:20.263 05:38:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:20.263 05:38:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:20.263 05:38:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:20.263 05:38:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:47:20.263 05:38:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:20.263 05:38:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:47:20.263 "name": "raid_bdev1", 00:47:20.263 "uuid": "4e9f6768-1234-4e93-a247-1cbf71e2c348", 00:47:20.263 "strip_size_kb": 0, 00:47:20.263 "state": "online", 00:47:20.263 "raid_level": "raid1", 00:47:20.263 "superblock": true, 00:47:20.263 "num_base_bdevs": 2, 00:47:20.263 "num_base_bdevs_discovered": 1, 00:47:20.263 "num_base_bdevs_operational": 1, 00:47:20.263 "base_bdevs_list": [ 00:47:20.263 { 00:47:20.263 "name": null, 00:47:20.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:47:20.263 "is_configured": false, 00:47:20.263 "data_offset": 0, 00:47:20.263 "data_size": 7936 00:47:20.263 }, 00:47:20.263 { 00:47:20.263 "name": "BaseBdev2", 00:47:20.263 "uuid": "8b04e00a-2fab-5478-908e-eaa56bb08a2d", 00:47:20.263 "is_configured": true, 00:47:20.263 "data_offset": 256, 
00:47:20.263 "data_size": 7936 00:47:20.263 } 00:47:20.263 ] 00:47:20.263 }' 00:47:20.263 05:38:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:47:20.263 05:38:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:47:20.263 05:38:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:47:20.263 05:38:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:47:20.263 05:38:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:47:20.263 05:38:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:20.263 05:38:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:20.263 05:38:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:20.263 05:38:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:47:20.263 05:38:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:20.263 05:38:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:20.263 [2024-12-09 05:38:07.210043] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:47:20.263 [2024-12-09 05:38:07.210113] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:47:20.263 [2024-12-09 05:38:07.210176] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:47:20.263 [2024-12-09 05:38:07.210191] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:47:20.263 [2024-12-09 05:38:07.210480] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:47:20.263 [2024-12-09 05:38:07.210533] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:47:20.263 [2024-12-09 05:38:07.210613] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:47:20.263 [2024-12-09 05:38:07.210634] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:47:20.263 [2024-12-09 05:38:07.210649] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:47:20.263 [2024-12-09 05:38:07.210664] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:47:20.263 BaseBdev1 00:47:20.263 05:38:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:20.263 05:38:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:47:21.645 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:47:21.645 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:47:21.645 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:47:21.645 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:47:21.645 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:47:21.645 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:47:21.645 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:47:21.645 05:38:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:47:21.645 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:47:21.645 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:47:21.645 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:21.645 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:47:21.645 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:21.645 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:21.645 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:21.645 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:47:21.645 "name": "raid_bdev1", 00:47:21.645 "uuid": "4e9f6768-1234-4e93-a247-1cbf71e2c348", 00:47:21.645 "strip_size_kb": 0, 00:47:21.645 "state": "online", 00:47:21.645 "raid_level": "raid1", 00:47:21.645 "superblock": true, 00:47:21.645 "num_base_bdevs": 2, 00:47:21.645 "num_base_bdevs_discovered": 1, 00:47:21.645 "num_base_bdevs_operational": 1, 00:47:21.645 "base_bdevs_list": [ 00:47:21.645 { 00:47:21.645 "name": null, 00:47:21.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:47:21.645 "is_configured": false, 00:47:21.645 "data_offset": 0, 00:47:21.645 "data_size": 7936 00:47:21.645 }, 00:47:21.645 { 00:47:21.645 "name": "BaseBdev2", 00:47:21.645 "uuid": "8b04e00a-2fab-5478-908e-eaa56bb08a2d", 00:47:21.645 "is_configured": true, 00:47:21.645 "data_offset": 256, 00:47:21.645 "data_size": 7936 00:47:21.645 } 00:47:21.645 ] 00:47:21.645 }' 00:47:21.645 05:38:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:47:21.645 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:21.905 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:47:21.905 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:47:21.905 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:47:21.905 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:47:21.905 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:47:21.905 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:47:21.905 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:21.905 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:21.905 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:21.905 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:21.905 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:47:21.905 "name": "raid_bdev1", 00:47:21.905 "uuid": "4e9f6768-1234-4e93-a247-1cbf71e2c348", 00:47:21.905 "strip_size_kb": 0, 00:47:21.905 "state": "online", 00:47:21.905 "raid_level": "raid1", 00:47:21.905 "superblock": true, 00:47:21.905 "num_base_bdevs": 2, 00:47:21.905 "num_base_bdevs_discovered": 1, 00:47:21.905 "num_base_bdevs_operational": 1, 00:47:21.905 "base_bdevs_list": [ 00:47:21.905 { 00:47:21.905 "name": 
null, 00:47:21.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:47:21.905 "is_configured": false, 00:47:21.905 "data_offset": 0, 00:47:21.905 "data_size": 7936 00:47:21.905 }, 00:47:21.905 { 00:47:21.905 "name": "BaseBdev2", 00:47:21.905 "uuid": "8b04e00a-2fab-5478-908e-eaa56bb08a2d", 00:47:21.905 "is_configured": true, 00:47:21.905 "data_offset": 256, 00:47:21.905 "data_size": 7936 00:47:21.905 } 00:47:21.905 ] 00:47:21.905 }' 00:47:21.905 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:47:21.905 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:47:21.905 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:47:22.164 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:47:22.164 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:47:22.164 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:47:22.164 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:47:22.164 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:47:22.164 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:22.164 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:47:22.164 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:22.164 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:47:22.164 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:22.164 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:22.164 [2024-12-09 05:38:08.923178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:47:22.164 [2024-12-09 05:38:08.923471] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:47:22.164 [2024-12-09 05:38:08.923539] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:47:22.164 request: 00:47:22.164 { 00:47:22.164 "base_bdev": "BaseBdev1", 00:47:22.164 "raid_bdev": "raid_bdev1", 00:47:22.164 "method": "bdev_raid_add_base_bdev", 00:47:22.164 "req_id": 1 00:47:22.164 } 00:47:22.164 Got JSON-RPC error response 00:47:22.164 response: 00:47:22.164 { 00:47:22.164 "code": -22, 00:47:22.164 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:47:22.164 } 00:47:22.164 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:47:22.164 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:47:22.164 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:22.164 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:22.164 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:47:22.164 05:38:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:47:23.099 05:38:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:47:23.099 05:38:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:47:23.099 05:38:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:47:23.099 05:38:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:47:23.099 05:38:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:47:23.099 05:38:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:47:23.099 05:38:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:47:23.099 05:38:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:47:23.099 05:38:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:47:23.099 05:38:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:47:23.099 05:38:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:23.099 05:38:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:47:23.099 05:38:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:23.099 05:38:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:23.099 05:38:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:23.099 05:38:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:47:23.099 "name": "raid_bdev1", 00:47:23.099 "uuid": "4e9f6768-1234-4e93-a247-1cbf71e2c348", 00:47:23.099 "strip_size_kb": 0, 
00:47:23.099 "state": "online", 00:47:23.099 "raid_level": "raid1", 00:47:23.099 "superblock": true, 00:47:23.099 "num_base_bdevs": 2, 00:47:23.099 "num_base_bdevs_discovered": 1, 00:47:23.099 "num_base_bdevs_operational": 1, 00:47:23.099 "base_bdevs_list": [ 00:47:23.099 { 00:47:23.099 "name": null, 00:47:23.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:47:23.099 "is_configured": false, 00:47:23.099 "data_offset": 0, 00:47:23.099 "data_size": 7936 00:47:23.099 }, 00:47:23.099 { 00:47:23.099 "name": "BaseBdev2", 00:47:23.099 "uuid": "8b04e00a-2fab-5478-908e-eaa56bb08a2d", 00:47:23.099 "is_configured": true, 00:47:23.099 "data_offset": 256, 00:47:23.099 "data_size": 7936 00:47:23.099 } 00:47:23.099 ] 00:47:23.099 }' 00:47:23.099 05:38:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:47:23.099 05:38:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:23.667 05:38:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:47:23.667 05:38:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:47:23.667 05:38:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:47:23.667 05:38:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:47:23.667 05:38:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:47:23.667 05:38:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:47:23.667 05:38:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:47:23.667 05:38:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:23.667 
05:38:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:23.667 05:38:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:23.667 05:38:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:47:23.667 "name": "raid_bdev1", 00:47:23.667 "uuid": "4e9f6768-1234-4e93-a247-1cbf71e2c348", 00:47:23.667 "strip_size_kb": 0, 00:47:23.667 "state": "online", 00:47:23.667 "raid_level": "raid1", 00:47:23.667 "superblock": true, 00:47:23.667 "num_base_bdevs": 2, 00:47:23.667 "num_base_bdevs_discovered": 1, 00:47:23.667 "num_base_bdevs_operational": 1, 00:47:23.667 "base_bdevs_list": [ 00:47:23.667 { 00:47:23.667 "name": null, 00:47:23.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:47:23.667 "is_configured": false, 00:47:23.667 "data_offset": 0, 00:47:23.667 "data_size": 7936 00:47:23.667 }, 00:47:23.667 { 00:47:23.667 "name": "BaseBdev2", 00:47:23.667 "uuid": "8b04e00a-2fab-5478-908e-eaa56bb08a2d", 00:47:23.667 "is_configured": true, 00:47:23.667 "data_offset": 256, 00:47:23.667 "data_size": 7936 00:47:23.667 } 00:47:23.667 ] 00:47:23.667 }' 00:47:23.667 05:38:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:47:23.667 05:38:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:47:23.667 05:38:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:47:23.926 05:38:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:47:23.926 05:38:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89646 00:47:23.926 05:38:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89646 ']' 00:47:23.926 05:38:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89646 00:47:23.926 05:38:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:47:23.926 05:38:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:23.926 05:38:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89646 00:47:23.926 05:38:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:23.926 05:38:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:23.926 killing process with pid 89646 00:47:23.926 05:38:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89646' 00:47:23.926 05:38:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89646 00:47:23.926 Received shutdown signal, test time was about 60.000000 seconds 00:47:23.926 00:47:23.926 Latency(us) 00:47:23.926 [2024-12-09T05:38:10.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:23.926 [2024-12-09T05:38:10.898Z] =================================================================================================================== 00:47:23.926 [2024-12-09T05:38:10.898Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:47:23.926 [2024-12-09 05:38:10.682286] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:47:23.926 05:38:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89646 00:47:23.926 [2024-12-09 05:38:10.682451] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:47:23.926 [2024-12-09 05:38:10.682545] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:47:23.926 [2024-12-09 05:38:10.682567] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:47:24.281 [2024-12-09 05:38:10.975634] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:47:25.656 05:38:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:47:25.656 00:47:25.656 real 0m18.965s 00:47:25.656 user 0m25.705s 00:47:25.656 sys 0m1.562s 00:47:25.656 05:38:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:25.656 05:38:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:47:25.656 ************************************ 00:47:25.656 END TEST raid_rebuild_test_sb_md_interleaved 00:47:25.656 ************************************ 00:47:25.656 05:38:12 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:47:25.656 05:38:12 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:47:25.656 05:38:12 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89646 ']' 00:47:25.656 05:38:12 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89646 00:47:25.656 05:38:12 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:47:25.656 00:47:25.656 real 13m17.892s 00:47:25.656 user 18m36.668s 00:47:25.656 sys 1m54.349s 00:47:25.656 05:38:12 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:25.656 ************************************ 00:47:25.656 END TEST bdev_raid 00:47:25.656 05:38:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:47:25.656 ************************************ 00:47:25.656 05:38:12 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:47:25.656 05:38:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:47:25.656 05:38:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:25.656 05:38:12 -- common/autotest_common.sh@10 -- # set +x 00:47:25.656 
************************************ 00:47:25.656 START TEST spdkcli_raid 00:47:25.656 ************************************ 00:47:25.656 05:38:12 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:47:25.656 * Looking for test storage... 00:47:25.656 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:47:25.656 05:38:12 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:47:25.656 05:38:12 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:47:25.656 05:38:12 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:47:25.656 05:38:12 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:47:25.656 05:38:12 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:25.656 05:38:12 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:25.656 05:38:12 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:25.656 05:38:12 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:47:25.656 05:38:12 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:47:25.656 05:38:12 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:47:25.656 05:38:12 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:47:25.656 05:38:12 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:47:25.656 05:38:12 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:47:25.656 05:38:12 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:47:25.656 05:38:12 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:25.656 05:38:12 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:47:25.656 05:38:12 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:47:25.656 05:38:12 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:25.656 05:38:12 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:25.656 05:38:12 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:47:25.656 05:38:12 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:47:25.656 05:38:12 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:25.656 05:38:12 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:47:25.656 05:38:12 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:47:25.656 05:38:12 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:47:25.656 05:38:12 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:47:25.656 05:38:12 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:25.656 05:38:12 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:47:25.656 05:38:12 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:47:25.656 05:38:12 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:25.656 05:38:12 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:25.656 05:38:12 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:47:25.656 05:38:12 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:25.656 05:38:12 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:47:25.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:25.656 --rc genhtml_branch_coverage=1 00:47:25.656 --rc genhtml_function_coverage=1 00:47:25.656 --rc genhtml_legend=1 00:47:25.656 --rc geninfo_all_blocks=1 00:47:25.656 --rc geninfo_unexecuted_blocks=1 00:47:25.656 00:47:25.656 ' 00:47:25.656 05:38:12 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:47:25.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:25.656 --rc genhtml_branch_coverage=1 00:47:25.656 --rc genhtml_function_coverage=1 00:47:25.656 --rc genhtml_legend=1 00:47:25.656 --rc geninfo_all_blocks=1 00:47:25.656 --rc geninfo_unexecuted_blocks=1 00:47:25.656 00:47:25.656 ' 00:47:25.657 
05:38:12 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:47:25.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:25.657 --rc genhtml_branch_coverage=1 00:47:25.657 --rc genhtml_function_coverage=1 00:47:25.657 --rc genhtml_legend=1 00:47:25.657 --rc geninfo_all_blocks=1 00:47:25.657 --rc geninfo_unexecuted_blocks=1 00:47:25.657 00:47:25.657 ' 00:47:25.657 05:38:12 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:47:25.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:25.657 --rc genhtml_branch_coverage=1 00:47:25.657 --rc genhtml_function_coverage=1 00:47:25.657 --rc genhtml_legend=1 00:47:25.657 --rc geninfo_all_blocks=1 00:47:25.657 --rc geninfo_unexecuted_blocks=1 00:47:25.657 00:47:25.657 ' 00:47:25.657 05:38:12 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:47:25.657 05:38:12 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:47:25.657 05:38:12 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:47:25.657 05:38:12 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:47:25.657 05:38:12 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:47:25.657 05:38:12 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:47:25.657 05:38:12 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:47:25.657 05:38:12 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:47:25.657 05:38:12 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:47:25.657 05:38:12 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:47:25.657 05:38:12 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:47:25.657 05:38:12 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:47:25.657 05:38:12 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:47:25.657 05:38:12 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:47:25.657 05:38:12 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:47:25.657 05:38:12 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:47:25.657 05:38:12 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:47:25.657 05:38:12 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:47:25.657 05:38:12 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:47:25.657 05:38:12 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:47:25.657 05:38:12 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:47:25.657 05:38:12 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:47:25.657 05:38:12 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:47:25.657 05:38:12 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:47:25.657 05:38:12 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:47:25.657 05:38:12 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:47:25.657 05:38:12 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:47:25.657 05:38:12 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:47:25.657 05:38:12 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:47:25.657 05:38:12 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:47:25.657 05:38:12 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:47:25.657 05:38:12 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:47:25.657 05:38:12 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:47:25.657 05:38:12 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:25.657 05:38:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:47:25.657 05:38:12 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:47:25.657 05:38:12 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90334 00:47:25.657 05:38:12 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:47:25.657 05:38:12 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90334 00:47:25.657 05:38:12 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90334 ']' 00:47:25.657 05:38:12 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:25.657 05:38:12 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:25.657 05:38:12 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:25.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:25.657 05:38:12 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:25.657 05:38:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:47:25.914 [2024-12-09 05:38:12.776372] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:47:25.914 [2024-12-09 05:38:12.776560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90334 ] 00:47:26.172 [2024-12-09 05:38:12.980717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:47:26.429 [2024-12-09 05:38:13.155153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:26.429 [2024-12-09 05:38:13.155165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:27.363 05:38:14 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:27.363 05:38:14 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:47:27.363 05:38:14 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:47:27.363 05:38:14 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:27.363 05:38:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:47:27.363 05:38:14 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:47:27.363 05:38:14 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:27.363 05:38:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:47:27.363 05:38:14 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:47:27.363 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:47:27.363 ' 00:47:29.265 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:47:29.265 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:47:29.265 05:38:15 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:47:29.265 05:38:15 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:29.265 05:38:15 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:47:29.265 05:38:15 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:47:29.265 05:38:15 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:29.265 05:38:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:47:29.265 05:38:15 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:47:29.265 ' 00:47:30.200 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:47:30.200 05:38:17 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:47:30.200 05:38:17 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:30.200 05:38:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:47:30.459 05:38:17 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:47:30.459 05:38:17 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:30.459 05:38:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:47:30.459 05:38:17 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:47:30.459 05:38:17 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:47:31.028 05:38:17 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:47:31.028 05:38:17 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:47:31.028 05:38:17 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:47:31.028 05:38:17 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:31.028 05:38:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:47:31.028 05:38:17 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:47:31.028 05:38:17 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:31.028 05:38:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:47:31.028 05:38:17 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:47:31.028 ' 00:47:31.960 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:47:32.217 05:38:19 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:47:32.217 05:38:19 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:32.217 05:38:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:47:32.217 05:38:19 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:47:32.217 05:38:19 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:32.217 05:38:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:47:32.217 05:38:19 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:47:32.217 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:47:32.217 ' 00:47:33.590 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:47:33.590 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:47:33.853 05:38:20 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:47:33.853 05:38:20 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:33.854 05:38:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:47:33.854 05:38:20 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90334 00:47:33.854 05:38:20 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90334 ']' 00:47:33.854 05:38:20 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90334 00:47:33.854 05:38:20 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:47:33.854 05:38:20 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:33.854 05:38:20 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90334 00:47:33.854 killing process with pid 90334 00:47:33.854 05:38:20 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:33.854 05:38:20 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:33.854 05:38:20 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90334' 00:47:33.854 05:38:20 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90334 00:47:33.854 05:38:20 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90334 00:47:36.388 05:38:22 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:47:36.389 05:38:22 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90334 ']' 00:47:36.389 05:38:22 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90334 00:47:36.389 05:38:22 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90334 ']' 00:47:36.389 05:38:22 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90334 00:47:36.389 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90334) - No such process 00:47:36.389 Process with pid 90334 is not found 00:47:36.389 05:38:22 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90334 is not found' 00:47:36.389 05:38:22 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:47:36.389 05:38:22 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:47:36.389 05:38:22 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:47:36.389 05:38:22 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:47:36.389 ************************************ 00:47:36.389 END TEST spdkcli_raid 
00:47:36.389 ************************************ 00:47:36.389 00:47:36.389 real 0m10.396s 00:47:36.389 user 0m21.272s 00:47:36.389 sys 0m1.374s 00:47:36.389 05:38:22 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:36.389 05:38:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:47:36.389 05:38:22 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:47:36.389 05:38:22 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:47:36.389 05:38:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:36.389 05:38:22 -- common/autotest_common.sh@10 -- # set +x 00:47:36.389 ************************************ 00:47:36.389 START TEST blockdev_raid5f 00:47:36.389 ************************************ 00:47:36.389 05:38:22 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:47:36.389 * Looking for test storage... 00:47:36.389 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:47:36.389 05:38:22 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:47:36.389 05:38:22 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:47:36.389 05:38:22 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:47:36.389 05:38:23 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:47:36.389 05:38:23 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:36.389 05:38:23 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:36.389 05:38:23 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:36.389 05:38:23 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:47:36.389 05:38:23 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:47:36.389 05:38:23 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:47:36.389 05:38:23 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:47:36.389 05:38:23 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:47:36.389 05:38:23 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:47:36.389 05:38:23 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:47:36.389 05:38:23 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:36.389 05:38:23 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:47:36.389 05:38:23 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:47:36.389 05:38:23 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:36.389 05:38:23 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:47:36.389 05:38:23 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:47:36.389 05:38:23 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:47:36.389 05:38:23 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:36.389 05:38:23 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:47:36.389 05:38:23 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:47:36.389 05:38:23 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:47:36.389 05:38:23 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:47:36.389 05:38:23 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:36.389 05:38:23 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:47:36.389 05:38:23 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:47:36.389 05:38:23 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:36.389 05:38:23 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:36.389 05:38:23 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:47:36.389 05:38:23 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:36.389 05:38:23 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:47:36.389 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:36.389 --rc genhtml_branch_coverage=1 00:47:36.389 --rc genhtml_function_coverage=1 00:47:36.389 --rc genhtml_legend=1 00:47:36.389 --rc geninfo_all_blocks=1 00:47:36.389 --rc geninfo_unexecuted_blocks=1 00:47:36.389 00:47:36.389 ' 00:47:36.389 05:38:23 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:47:36.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:36.389 --rc genhtml_branch_coverage=1 00:47:36.389 --rc genhtml_function_coverage=1 00:47:36.389 --rc genhtml_legend=1 00:47:36.389 --rc geninfo_all_blocks=1 00:47:36.389 --rc geninfo_unexecuted_blocks=1 00:47:36.389 00:47:36.389 ' 00:47:36.389 05:38:23 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:47:36.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:36.389 --rc genhtml_branch_coverage=1 00:47:36.389 --rc genhtml_function_coverage=1 00:47:36.389 --rc genhtml_legend=1 00:47:36.389 --rc geninfo_all_blocks=1 00:47:36.389 --rc geninfo_unexecuted_blocks=1 00:47:36.389 00:47:36.389 ' 00:47:36.389 05:38:23 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:47:36.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:36.389 --rc genhtml_branch_coverage=1 00:47:36.389 --rc genhtml_function_coverage=1 00:47:36.389 --rc genhtml_legend=1 00:47:36.389 --rc geninfo_all_blocks=1 00:47:36.389 --rc geninfo_unexecuted_blocks=1 00:47:36.389 00:47:36.389 ' 00:47:36.389 05:38:23 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:47:36.389 05:38:23 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:47:36.389 05:38:23 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:47:36.389 05:38:23 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:47:36.389 05:38:23 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:47:36.389 05:38:23 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:47:36.389 05:38:23 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:47:36.389 05:38:23 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:47:36.389 05:38:23 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:47:36.389 05:38:23 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:47:36.389 05:38:23 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:47:36.389 05:38:23 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:47:36.389 05:38:23 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:47:36.389 05:38:23 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:47:36.389 05:38:23 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:47:36.389 05:38:23 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:47:36.389 05:38:23 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:47:36.389 05:38:23 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:47:36.389 05:38:23 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:47:36.389 05:38:23 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:47:36.389 05:38:23 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:47:36.389 05:38:23 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:47:36.389 05:38:23 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:47:36.389 05:38:23 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:47:36.389 05:38:23 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90609 00:47:36.389 05:38:23 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:47:36.389 05:38:23 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:47:36.389 05:38:23 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90609 00:47:36.389 05:38:23 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90609 ']' 00:47:36.389 05:38:23 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:36.389 05:38:23 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:36.389 05:38:23 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:36.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:36.389 05:38:23 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:36.389 05:38:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:36.389 [2024-12-09 05:38:23.215262] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:47:36.389 [2024-12-09 05:38:23.215509] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90609 ] 00:47:36.663 [2024-12-09 05:38:23.404146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:36.663 [2024-12-09 05:38:23.524412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:37.624 05:38:24 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:37.624 05:38:24 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:47:37.624 05:38:24 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:47:37.624 05:38:24 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:47:37.624 05:38:24 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:47:37.624 05:38:24 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:37.624 05:38:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:37.624 Malloc0 00:47:37.624 Malloc1 00:47:37.624 Malloc2 00:47:37.624 05:38:24 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:37.624 05:38:24 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:47:37.624 05:38:24 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:37.624 05:38:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:37.624 05:38:24 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:37.624 05:38:24 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:47:37.624 05:38:24 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:47:37.624 05:38:24 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:37.624 05:38:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:37.624 05:38:24 
blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:37.624 05:38:24 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:47:37.624 05:38:24 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:37.624 05:38:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:37.624 05:38:24 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:37.624 05:38:24 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:47:37.624 05:38:24 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:37.624 05:38:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:37.624 05:38:24 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:37.624 05:38:24 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:47:37.624 05:38:24 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:47:37.624 05:38:24 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:37.624 05:38:24 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:47:37.624 05:38:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:37.624 05:38:24 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:37.624 05:38:24 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:47:37.624 05:38:24 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:47:37.882 05:38:24 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "e9fb66c2-8e3a-4343-9bab-5d3e68ef000c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "e9fb66c2-8e3a-4343-9bab-5d3e68ef000c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "e9fb66c2-8e3a-4343-9bab-5d3e68ef000c",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "83c06c09-142a-4e8b-a0a5-f7f13dbe23ff",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "6522ee04-940c-4e05-8fb0-cddaea40749f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "dff9d0ef-af3c-42fa-9479-66705dd8af84",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:47:37.882 05:38:24 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:47:37.882 05:38:24 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:47:37.882 05:38:24 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:47:37.882 05:38:24 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 90609 00:47:37.882 05:38:24 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90609 ']' 00:47:37.882 05:38:24 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90609 00:47:37.882 05:38:24 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:47:37.882 05:38:24 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:37.882 
05:38:24 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90609 00:47:37.882 killing process with pid 90609 00:47:37.882 05:38:24 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:37.882 05:38:24 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:37.882 05:38:24 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90609' 00:47:37.882 05:38:24 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90609 00:47:37.882 05:38:24 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90609 00:47:40.410 05:38:27 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:47:40.410 05:38:27 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:47:40.410 05:38:27 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:47:40.410 05:38:27 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:40.410 05:38:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:40.410 ************************************ 00:47:40.410 START TEST bdev_hello_world 00:47:40.410 ************************************ 00:47:40.410 05:38:27 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:47:40.667 [2024-12-09 05:38:27.394438] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:47:40.667 [2024-12-09 05:38:27.395021] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90682 ] 00:47:40.667 [2024-12-09 05:38:27.592311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:40.924 [2024-12-09 05:38:27.751216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:41.490 [2024-12-09 05:38:28.360034] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:47:41.490 [2024-12-09 05:38:28.360105] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:47:41.490 [2024-12-09 05:38:28.360161] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:47:41.490 [2024-12-09 05:38:28.360673] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:47:41.490 [2024-12-09 05:38:28.360891] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:47:41.490 [2024-12-09 05:38:28.360920] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:47:41.490 [2024-12-09 05:38:28.360984] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:47:41.490 00:47:41.490 [2024-12-09 05:38:28.361013] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:47:42.863 00:47:42.863 real 0m2.291s 00:47:42.863 user 0m1.835s 00:47:42.863 sys 0m0.329s 00:47:42.863 05:38:29 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:42.863 05:38:29 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:47:42.863 ************************************ 00:47:42.863 END TEST bdev_hello_world 00:47:42.863 ************************************ 00:47:42.863 05:38:29 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:47:42.863 05:38:29 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:47:42.863 05:38:29 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:42.863 05:38:29 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:42.863 ************************************ 00:47:42.863 START TEST bdev_bounds 00:47:42.863 ************************************ 00:47:42.863 05:38:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:47:42.863 05:38:29 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90724 00:47:42.863 Process bdevio pid: 90724 00:47:42.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:47:42.863 05:38:29 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:47:42.863 05:38:29 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90724' 00:47:42.863 05:38:29 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:47:42.863 05:38:29 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90724 00:47:42.863 05:38:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90724 ']' 00:47:42.863 05:38:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:42.863 05:38:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:42.863 05:38:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:42.863 05:38:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:42.863 05:38:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:47:42.863 [2024-12-09 05:38:29.741953] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:47:42.863 [2024-12-09 05:38:29.742351] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90724 ] 00:47:43.121 [2024-12-09 05:38:29.929742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:47:43.121 [2024-12-09 05:38:30.048680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:43.121 [2024-12-09 05:38:30.048999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:43.121 [2024-12-09 05:38:30.049003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:47:44.056 05:38:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:44.056 05:38:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:47:44.056 05:38:30 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:47:44.056 I/O targets: 00:47:44.056 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:47:44.056 00:47:44.056 00:47:44.056 CUnit - A unit testing framework for C - Version 2.1-3 00:47:44.056 http://cunit.sourceforge.net/ 00:47:44.056 00:47:44.056 00:47:44.056 Suite: bdevio tests on: raid5f 00:47:44.056 Test: blockdev write read block ...passed 00:47:44.056 Test: blockdev write zeroes read block ...passed 00:47:44.056 Test: blockdev write zeroes read no split ...passed 00:47:44.056 Test: blockdev write zeroes read split ...passed 00:47:44.314 Test: blockdev write zeroes read split partial ...passed 00:47:44.314 Test: blockdev reset ...passed 00:47:44.314 Test: blockdev write read 8 blocks ...passed 00:47:44.314 Test: blockdev write read size > 128k ...passed 00:47:44.314 Test: blockdev write read invalid size ...passed 00:47:44.314 Test: blockdev write read offset + nbytes == size of blockdev ...passed 
00:47:44.314 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:47:44.314 Test: blockdev write read max offset ...passed 00:47:44.314 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:47:44.314 Test: blockdev writev readv 8 blocks ...passed 00:47:44.314 Test: blockdev writev readv 30 x 1block ...passed 00:47:44.314 Test: blockdev writev readv block ...passed 00:47:44.314 Test: blockdev writev readv size > 128k ...passed 00:47:44.314 Test: blockdev writev readv size > 128k in two iovs ...passed 00:47:44.314 Test: blockdev comparev and writev ...passed 00:47:44.314 Test: blockdev nvme passthru rw ...passed 00:47:44.314 Test: blockdev nvme passthru vendor specific ...passed 00:47:44.314 Test: blockdev nvme admin passthru ...passed 00:47:44.314 Test: blockdev copy ...passed 00:47:44.314 00:47:44.314 Run Summary: Type Total Ran Passed Failed Inactive 00:47:44.314 suites 1 1 n/a 0 0 00:47:44.314 tests 23 23 23 0 0 00:47:44.314 asserts 130 130 130 0 n/a 00:47:44.314 00:47:44.314 Elapsed time = 0.528 seconds 00:47:44.314 0 00:47:44.314 05:38:31 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90724 00:47:44.314 05:38:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90724 ']' 00:47:44.314 05:38:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90724 00:47:44.314 05:38:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:47:44.314 05:38:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:44.314 05:38:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90724 00:47:44.314 killing process with pid 90724 00:47:44.314 05:38:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:44.314 05:38:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:44.314 05:38:31 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90724' 00:47:44.314 05:38:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90724 00:47:44.314 05:38:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90724 00:47:45.686 ************************************ 00:47:45.686 END TEST bdev_bounds 00:47:45.686 ************************************ 00:47:45.686 05:38:32 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:47:45.686 00:47:45.686 real 0m2.795s 00:47:45.686 user 0m6.826s 00:47:45.686 sys 0m0.453s 00:47:45.686 05:38:32 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:45.686 05:38:32 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:47:45.686 05:38:32 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:47:45.686 05:38:32 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:47:45.686 05:38:32 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:45.686 05:38:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:45.686 ************************************ 00:47:45.686 START TEST bdev_nbd 00:47:45.686 ************************************ 00:47:45.686 05:38:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:47:45.686 05:38:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:47:45.686 05:38:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:47:45.686 05:38:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:45.686 05:38:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 
00:47:45.686 05:38:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:47:45.686 05:38:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:47:45.686 05:38:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:47:45.686 05:38:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:47:45.687 05:38:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:47:45.687 05:38:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:47:45.687 05:38:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:47:45.687 05:38:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:47:45.687 05:38:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:47:45.687 05:38:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:47:45.687 05:38:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:47:45.687 05:38:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90785 00:47:45.687 05:38:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:47:45.687 05:38:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90785 /var/tmp/spdk-nbd.sock 00:47:45.687 05:38:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90785 ']' 00:47:45.687 05:38:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:47:45.687 05:38:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk-nbd.sock 00:47:45.687 05:38:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:45.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:47:45.687 05:38:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:47:45.687 05:38:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:45.687 05:38:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:47:45.687 [2024-12-09 05:38:32.578059] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:47:45.687 [2024-12-09 05:38:32.578214] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:45.945 [2024-12-09 05:38:32.756855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:45.945 [2024-12-09 05:38:32.876041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:46.904 05:38:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:46.904 05:38:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:47:46.904 05:38:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:47:46.904 05:38:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:46.904 05:38:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:47:46.904 05:38:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:47:46.904 05:38:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx 
/var/tmp/spdk-nbd.sock raid5f 00:47:46.904 05:38:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:46.904 05:38:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:47:46.904 05:38:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:47:46.904 05:38:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:47:46.904 05:38:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:47:46.904 05:38:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:47:46.904 05:38:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:47:46.904 05:38:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:47:46.904 05:38:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:47:46.904 05:38:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:47:47.161 05:38:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:47:47.161 05:38:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:47:47.161 05:38:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:47:47.161 05:38:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:47:47.161 05:38:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:47:47.161 05:38:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:47:47.161 05:38:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:47:47.161 05:38:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:47:47.161 05:38:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:47:47.161 05:38:33 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:47:47.161 1+0 records in 00:47:47.161 1+0 records out 00:47:47.161 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453832 s, 9.0 MB/s 00:47:47.161 05:38:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:47:47.161 05:38:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:47:47.161 05:38:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:47:47.161 05:38:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:47:47.161 05:38:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:47:47.161 05:38:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:47:47.161 05:38:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:47:47.162 05:38:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:47:47.419 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:47:47.419 { 00:47:47.419 "nbd_device": "/dev/nbd0", 00:47:47.419 "bdev_name": "raid5f" 00:47:47.419 } 00:47:47.419 ]' 00:47:47.419 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:47:47.419 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:47:47.419 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:47:47.419 { 00:47:47.419 "nbd_device": "/dev/nbd0", 00:47:47.419 "bdev_name": "raid5f" 00:47:47.419 } 00:47:47.419 ]' 00:47:47.419 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:47:47.419 
05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:47.419 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:47:47.419 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:47:47.419 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:47:47.419 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:47:47.419 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:47:47.677 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:47:47.677 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:47:47.677 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:47:47.677 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:47:47.677 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:47:47.677 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:47:47.677 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:47:47.677 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:47:47.677 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:47:47.677 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:47.677 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:47:47.934 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:47:47.934 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:47:47.934 
05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:47:47.934 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:47:47.934 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:47:47.934 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:47:47.934 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:47:47.934 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:47:47.934 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:47:47.934 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:47:47.934 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:47:47.934 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:47:47.934 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:47:47.934 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:47.934 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:47:47.934 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:47:47.934 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:47:47.934 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:47:47.934 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:47:47.934 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:47.934 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:47:47.934 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:47:47.934 05:38:34 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:47:47.934 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:47:47.934 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:47:47.934 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:47:47.934 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:47:47.934 05:38:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:47:48.192 /dev/nbd0 00:47:48.192 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:47:48.192 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:47:48.192 05:38:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:47:48.192 05:38:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:47:48.192 05:38:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:47:48.192 05:38:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:47:48.192 05:38:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:47:48.192 05:38:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:47:48.192 05:38:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:47:48.192 05:38:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:47:48.192 05:38:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:47:48.192 1+0 records in 00:47:48.192 1+0 records out 00:47:48.192 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282838 s, 14.5 MB/s 00:47:48.192 05:38:35 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:47:48.192 05:38:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:47:48.192 05:38:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:47:48.192 05:38:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:47:48.192 05:38:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:47:48.192 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:47:48.192 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:47:48.192 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:47:48.192 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:48.192 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:47:48.450 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:47:48.450 { 00:47:48.450 "nbd_device": "/dev/nbd0", 00:47:48.450 "bdev_name": "raid5f" 00:47:48.450 } 00:47:48.450 ]' 00:47:48.450 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:47:48.450 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:47:48.450 { 00:47:48.450 "nbd_device": "/dev/nbd0", 00:47:48.450 "bdev_name": "raid5f" 00:47:48.450 } 00:47:48.450 ]' 00:47:48.450 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:47:48.450 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:47:48.450 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:47:48.450 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 
00:47:48.450 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:47:48.450 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:47:48.450 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:47:48.450 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:47:48.450 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:47:48.450 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:47:48.450 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:47:48.450 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:47:48.450 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:47:48.450 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:47:48.709 256+0 records in 00:47:48.709 256+0 records out 00:47:48.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00720636 s, 146 MB/s 00:47:48.709 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:47:48.709 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:47:48.709 256+0 records in 00:47:48.709 256+0 records out 00:47:48.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0407131 s, 25.8 MB/s 00:47:48.709 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:47:48.709 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:47:48.709 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:47:48.709 05:38:35 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:47:48.709 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:47:48.709 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:47:48.709 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:47:48.709 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:47:48.709 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:47:48.709 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:47:48.709 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:47:48.709 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:48.709 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:47:48.709 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:47:48.709 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:47:48.709 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:47:48.709 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:47:48.967 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:47:48.967 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:47:48.967 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:47:48.967 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:47:48.967 05:38:35 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:47:48.967 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:47:48.967 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:47:48.967 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:47:48.967 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:47:48.967 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:48.967 05:38:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:47:49.225 05:38:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:47:49.225 05:38:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:47:49.225 05:38:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:47:49.225 05:38:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:47:49.225 05:38:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:47:49.225 05:38:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:47:49.225 05:38:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:47:49.225 05:38:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:47:49.225 05:38:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:47:49.225 05:38:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:47:49.225 05:38:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:47:49.225 05:38:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:47:49.225 05:38:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:47:49.225 05:38:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:47:49.225 05:38:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:47:49.225 05:38:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:47:49.483 malloc_lvol_verify 00:47:49.484 05:38:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:47:49.741 cae2002b-b523-45ae-920b-21b41263f7c2 00:47:49.742 05:38:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:47:49.999 5fac3677-3447-4fea-87ab-6eff924a5e74 00:47:49.999 05:38:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:47:50.257 /dev/nbd0 00:47:50.257 05:38:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:47:50.257 05:38:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:47:50.257 05:38:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:47:50.257 05:38:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:47:50.257 05:38:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:47:50.257 mke2fs 1.47.0 (5-Feb-2023) 00:47:50.257 Discarding device blocks: 0/4096 done 00:47:50.257 Creating filesystem with 4096 1k blocks and 1024 inodes 00:47:50.257 00:47:50.257 Allocating group tables: 0/1 done 00:47:50.257 Writing inode tables: 0/1 done 00:47:50.257 Creating journal (1024 blocks): done 00:47:50.257 Writing superblocks and filesystem accounting information: 0/1 done 00:47:50.257 00:47:50.257 05:38:37 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:47:50.257 05:38:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:50.257 05:38:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:47:50.257 05:38:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:47:50.257 05:38:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:47:50.257 05:38:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:47:50.257 05:38:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:47:50.515 05:38:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:47:50.515 05:38:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:47:50.515 05:38:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:47:50.515 05:38:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:47:50.515 05:38:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:47:50.515 05:38:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:47:50.515 05:38:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:47:50.515 05:38:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:47:50.515 05:38:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90785 00:47:50.515 05:38:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90785 ']' 00:47:50.515 05:38:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90785 00:47:50.515 05:38:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:47:50.516 05:38:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:47:50.516 05:38:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90785 00:47:50.516 05:38:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:50.516 05:38:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:50.516 killing process with pid 90785 00:47:50.516 05:38:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90785' 00:47:50.516 05:38:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90785 00:47:50.516 05:38:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@978 -- # wait 90785 00:47:52.418 05:38:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:47:52.418 00:47:52.418 real 0m6.483s 00:47:52.418 user 0m9.140s 00:47:52.418 sys 0m1.409s 00:47:52.418 05:38:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:52.418 ************************************ 00:47:52.418 END TEST bdev_nbd 00:47:52.418 ************************************ 00:47:52.418 05:38:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:47:52.418 05:38:39 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:47:52.418 05:38:39 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:47:52.418 05:38:39 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:47:52.418 05:38:39 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:47:52.418 05:38:39 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:47:52.418 05:38:39 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:52.418 05:38:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:52.418 ************************************ 00:47:52.418 START TEST bdev_fio 00:47:52.418 ************************************ 
00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:47:52.418 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio -- 
common/autotest_common.sh@10 -- # set +x 00:47:52.418 ************************************ 00:47:52.418 START TEST bdev_fio_rw_verify 00:47:52.418 ************************************ 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:47:52.418 05:38:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:47:52.677 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:47:52.677 fio-3.35 00:47:52.677 Starting 1 thread 00:48:04.888 00:48:04.888 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90987: Mon Dec 9 05:38:50 2024 00:48:04.888 read: IOPS=7995, BW=31.2MiB/s (32.8MB/s)(312MiB/10001msec) 00:48:04.888 slat (usec): min=21, max=1526, avg=31.36, stdev=10.07 00:48:04.888 clat (usec): min=12, max=1807, avg=198.97, stdev=78.75 00:48:04.888 lat (usec): min=40, max=1839, avg=230.33, stdev=80.13 00:48:04.888 clat percentiles (usec): 00:48:04.888 | 50.000th=[ 198], 99.000th=[ 367], 99.900th=[ 461], 99.990th=[ 848], 00:48:04.888 | 99.999th=[ 1811] 
00:48:04.888 write: IOPS=8434, BW=32.9MiB/s (34.5MB/s)(326MiB/9891msec); 0 zone resets 00:48:04.888 slat (usec): min=11, max=263, avg=24.43, stdev= 8.30 00:48:04.888 clat (usec): min=83, max=1205, avg=455.64, stdev=70.04 00:48:04.888 lat (usec): min=105, max=1454, avg=480.07, stdev=72.03 00:48:04.888 clat percentiles (usec): 00:48:04.888 | 50.000th=[ 457], 99.000th=[ 635], 99.900th=[ 799], 99.990th=[ 1057], 00:48:04.888 | 99.999th=[ 1205] 00:48:04.888 bw ( KiB/s): min=29544, max=39352, per=98.84%, avg=33346.11, stdev=2080.75, samples=19 00:48:04.888 iops : min= 7386, max= 9838, avg=8336.53, stdev=520.19, samples=19 00:48:04.888 lat (usec) : 20=0.01%, 100=5.65%, 250=28.97%, 500=53.11%, 750=12.18% 00:48:04.888 lat (usec) : 1000=0.08% 00:48:04.888 lat (msec) : 2=0.01% 00:48:04.888 cpu : usr=98.04%, sys=0.80%, ctx=75, majf=0, minf=7065 00:48:04.888 IO depths : 1=7.8%, 2=19.9%, 4=55.2%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:04.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:04.888 complete : 0=0.0%, 4=90.1%, 8=9.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:04.888 issued rwts: total=79967,83428,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:04.888 latency : target=0, window=0, percentile=100.00%, depth=8 00:48:04.888 00:48:04.888 Run status group 0 (all jobs): 00:48:04.888 READ: bw=31.2MiB/s (32.8MB/s), 31.2MiB/s-31.2MiB/s (32.8MB/s-32.8MB/s), io=312MiB (328MB), run=10001-10001msec 00:48:04.888 WRITE: bw=32.9MiB/s (34.5MB/s), 32.9MiB/s-32.9MiB/s (34.5MB/s-34.5MB/s), io=326MiB (342MB), run=9891-9891msec 00:48:05.455 ----------------------------------------------------- 00:48:05.455 Suppressions used: 00:48:05.455 count bytes template 00:48:05.455 1 7 /usr/src/fio/parse.c 00:48:05.455 949 91104 /usr/src/fio/iolog.c 00:48:05.455 1 8 libtcmalloc_minimal.so 00:48:05.455 1 904 libcrypto.so 00:48:05.455 ----------------------------------------------------- 00:48:05.455 00:48:05.455 00:48:05.455 real 0m13.197s 00:48:05.455 user 0m13.556s 
00:48:05.455 sys 0m0.842s 00:48:05.455 05:38:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:48:05.455 05:38:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:48:05.455 ************************************ 00:48:05.455 END TEST bdev_fio_rw_verify 00:48:05.455 ************************************ 00:48:05.455 05:38:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:48:05.455 05:38:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:48:05.455 05:38:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:48:05.455 05:38:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:48:05.455 05:38:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:48:05.455 05:38:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:48:05.455 05:38:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:48:05.455 05:38:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:48:05.455 05:38:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:48:05.455 05:38:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:48:05.455 05:38:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:48:05.455 05:38:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:48:05.455 05:38:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:48:05.455 05:38:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- 
# '[' trim == verify ']' 00:48:05.455 05:38:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:48:05.455 05:38:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:48:05.455 05:38:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "e9fb66c2-8e3a-4343-9bab-5d3e68ef000c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "e9fb66c2-8e3a-4343-9bab-5d3e68ef000c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "e9fb66c2-8e3a-4343-9bab-5d3e68ef000c",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "83c06c09-142a-4e8b-a0a5-f7f13dbe23ff",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "6522ee04-940c-4e05-8fb0-cddaea40749f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "dff9d0ef-af3c-42fa-9479-66705dd8af84",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:48:05.455 
05:38:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:48:05.715 05:38:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:48:05.715 05:38:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:48:05.715 /home/vagrant/spdk_repo/spdk 00:48:05.715 05:38:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:48:05.715 05:38:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:48:05.715 05:38:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:48:05.715 00:48:05.715 real 0m13.434s 00:48:05.715 user 0m13.676s 00:48:05.715 sys 0m0.934s 00:48:05.715 05:38:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:48:05.715 05:38:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:48:05.715 ************************************ 00:48:05.715 END TEST bdev_fio 00:48:05.715 ************************************ 00:48:05.715 05:38:52 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:48:05.715 05:38:52 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:48:05.715 05:38:52 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:48:05.715 05:38:52 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:48:05.715 05:38:52 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:48:05.715 ************************************ 00:48:05.715 START TEST bdev_verify 00:48:05.715 ************************************ 00:48:05.715 05:38:52 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:48:05.715 [2024-12-09 05:38:52.633704] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:48:05.715 [2024-12-09 05:38:52.633960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91151 ] 00:48:05.974 [2024-12-09 05:38:52.833658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:48:06.233 [2024-12-09 05:38:53.003587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:06.233 [2024-12-09 05:38:53.003588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:06.800 Running I/O for 5 seconds... 00:48:08.695 11288.00 IOPS, 44.09 MiB/s [2024-12-09T05:38:57.046Z] 11802.50 IOPS, 46.10 MiB/s [2024-12-09T05:38:57.981Z] 12124.67 IOPS, 47.36 MiB/s [2024-12-09T05:38:58.918Z] 11609.75 IOPS, 45.35 MiB/s [2024-12-09T05:38:58.918Z] 11727.00 IOPS, 45.81 MiB/s 00:48:11.946 Latency(us) 00:48:11.946 [2024-12-09T05:38:58.918Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:11.946 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:48:11.946 Verification LBA range: start 0x0 length 0x2000 00:48:11.946 raid5f : 5.02 5880.47 22.97 0.00 0.00 32730.74 336.99 25380.31 00:48:11.946 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:48:11.946 Verification LBA range: start 0x2000 length 0x2000 00:48:11.946 raid5f : 5.03 5847.17 22.84 0.00 0.00 33030.06 155.46 25499.46 00:48:11.946 [2024-12-09T05:38:58.918Z] =================================================================================================================== 00:48:11.946 [2024-12-09T05:38:58.918Z] Total : 11727.64 45.81 0.00 0.00 32880.09 155.46 25499.46 00:48:13.323 
00:48:13.323 real 0m7.737s 00:48:13.323 user 0m13.972s 00:48:13.323 sys 0m0.390s 00:48:13.323 05:39:00 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:48:13.323 ************************************ 00:48:13.323 END TEST bdev_verify 00:48:13.323 05:39:00 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:48:13.323 ************************************ 00:48:13.582 05:39:00 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:48:13.582 05:39:00 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:48:13.582 05:39:00 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:48:13.582 05:39:00 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:48:13.582 ************************************ 00:48:13.582 START TEST bdev_verify_big_io 00:48:13.582 ************************************ 00:48:13.582 05:39:00 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:48:13.582 [2024-12-09 05:39:00.419434] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 
00:48:13.582 [2024-12-09 05:39:00.419628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91250 ] 00:48:13.840 [2024-12-09 05:39:00.607016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:48:13.840 [2024-12-09 05:39:00.759169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:13.840 [2024-12-09 05:39:00.759175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:14.776 Running I/O for 5 seconds... 00:48:16.655 504.00 IOPS, 31.50 MiB/s [2024-12-09T05:39:04.564Z] 569.00 IOPS, 35.56 MiB/s [2024-12-09T05:39:05.940Z] 592.00 IOPS, 37.00 MiB/s [2024-12-09T05:39:06.877Z] 571.00 IOPS, 35.69 MiB/s [2024-12-09T05:39:06.877Z] 609.20 IOPS, 38.08 MiB/s 00:48:19.905 Latency(us) 00:48:19.905 [2024-12-09T05:39:06.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:19.905 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:48:19.905 Verification LBA range: start 0x0 length 0x200 00:48:19.905 raid5f : 5.31 310.67 19.42 0.00 0.00 10170291.68 193.63 533820.51 00:48:19.905 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:48:19.905 Verification LBA range: start 0x200 length 0x200 00:48:19.905 raid5f : 5.41 305.06 19.07 0.00 0.00 10371447.56 215.97 545259.52 00:48:19.905 [2024-12-09T05:39:06.877Z] =================================================================================================================== 00:48:19.905 [2024-12-09T05:39:06.877Z] Total : 615.73 38.48 0.00 0.00 10270808.66 193.63 545259.52 00:48:21.832 00:48:21.832 real 0m8.108s 00:48:21.832 user 0m14.727s 00:48:21.832 sys 0m0.401s 00:48:21.832 05:39:08 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:48:21.832 
************************************ 00:48:21.832 05:39:08 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:48:21.832 END TEST bdev_verify_big_io 00:48:21.832 ************************************ 00:48:21.832 05:39:08 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:48:21.832 05:39:08 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:48:21.832 05:39:08 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:48:21.832 05:39:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:48:21.832 ************************************ 00:48:21.832 START TEST bdev_write_zeroes 00:48:21.832 ************************************ 00:48:21.832 05:39:08 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:48:21.832 [2024-12-09 05:39:08.603665] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:48:21.832 [2024-12-09 05:39:08.603895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91354 ] 00:48:21.832 [2024-12-09 05:39:08.797843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:22.091 [2024-12-09 05:39:08.943020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:22.658 Running I/O for 1 seconds... 
00:48:24.034 16839.00 IOPS, 65.78 MiB/s 00:48:24.034 Latency(us) 00:48:24.034 [2024-12-09T05:39:11.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:24.034 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:48:24.034 raid5f : 1.01 16813.13 65.68 0.00 0.00 7578.29 2293.76 10545.34 00:48:24.034 [2024-12-09T05:39:11.006Z] =================================================================================================================== 00:48:24.034 [2024-12-09T05:39:11.006Z] Total : 16813.13 65.68 0.00 0.00 7578.29 2293.76 10545.34 00:48:25.410 00:48:25.410 real 0m3.756s 00:48:25.410 user 0m3.238s 00:48:25.410 sys 0m0.379s 00:48:25.410 05:39:12 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:48:25.410 ************************************ 00:48:25.410 END TEST bdev_write_zeroes 00:48:25.410 05:39:12 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:48:25.410 ************************************ 00:48:25.410 05:39:12 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:48:25.410 05:39:12 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:48:25.410 05:39:12 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:48:25.410 05:39:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:48:25.410 ************************************ 00:48:25.410 START TEST bdev_json_nonenclosed 00:48:25.410 ************************************ 00:48:25.410 05:39:12 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:48:25.669 [2024-12-09 
05:39:12.418717] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:48:25.669 [2024-12-09 05:39:12.418986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91409 ] 00:48:25.669 [2024-12-09 05:39:12.621902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:25.927 [2024-12-09 05:39:12.773816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:25.927 [2024-12-09 05:39:12.774084] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:48:25.927 [2024-12-09 05:39:12.774126] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:48:25.927 [2024-12-09 05:39:12.774142] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:48:26.494 00:48:26.494 real 0m0.887s 00:48:26.494 user 0m0.605s 00:48:26.494 sys 0m0.174s 00:48:26.494 05:39:13 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:48:26.494 ************************************ 00:48:26.494 END TEST bdev_json_nonenclosed 00:48:26.494 ************************************ 00:48:26.494 05:39:13 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:48:26.494 05:39:13 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:48:26.494 05:39:13 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:48:26.494 05:39:13 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:48:26.494 05:39:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:48:26.494 
************************************ 00:48:26.494 START TEST bdev_json_nonarray 00:48:26.494 ************************************ 00:48:26.494 05:39:13 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:48:26.494 [2024-12-09 05:39:13.354996] Starting SPDK v25.01-pre git sha1 afe42438a / DPDK 24.03.0 initialization... 00:48:26.494 [2024-12-09 05:39:13.355153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91436 ] 00:48:26.752 [2024-12-09 05:39:13.552506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:26.752 [2024-12-09 05:39:13.702672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:26.752 [2024-12-09 05:39:13.702827] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:48:26.752 [2024-12-09 05:39:13.702860] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:48:26.752 [2024-12-09 05:39:13.702887] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:48:27.317 00:48:27.317 real 0m0.873s 00:48:27.317 user 0m0.587s 00:48:27.317 sys 0m0.179s 00:48:27.317 05:39:14 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:48:27.317 05:39:14 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:48:27.317 ************************************ 00:48:27.317 END TEST bdev_json_nonarray 00:48:27.317 ************************************ 00:48:27.317 05:39:14 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:48:27.317 05:39:14 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:48:27.317 05:39:14 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:48:27.317 05:39:14 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:48:27.317 05:39:14 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:48:27.317 05:39:14 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:48:27.317 05:39:14 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:48:27.317 05:39:14 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:48:27.317 05:39:14 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:48:27.317 05:39:14 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:48:27.317 05:39:14 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:48:27.317 00:48:27.317 real 0m51.300s 00:48:27.317 user 1m9.118s 00:48:27.317 sys 0m5.689s 00:48:27.317 05:39:14 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:48:27.317 05:39:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:48:27.317 
************************************ 00:48:27.317 END TEST blockdev_raid5f 00:48:27.317 ************************************ 00:48:27.317 05:39:14 -- spdk/autotest.sh@194 -- # uname -s 00:48:27.317 05:39:14 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:48:27.317 05:39:14 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:48:27.317 05:39:14 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:48:27.317 05:39:14 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:48:27.317 05:39:14 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:48:27.317 05:39:14 -- spdk/autotest.sh@260 -- # timing_exit lib 00:48:27.317 05:39:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:48:27.317 05:39:14 -- common/autotest_common.sh@10 -- # set +x 00:48:27.317 05:39:14 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:48:27.317 05:39:14 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:48:27.317 05:39:14 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:48:27.317 05:39:14 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:48:27.317 05:39:14 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:48:27.317 05:39:14 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:48:27.317 05:39:14 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:48:27.317 05:39:14 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:48:27.317 05:39:14 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:48:27.317 05:39:14 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:48:27.317 05:39:14 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:48:27.317 05:39:14 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:48:27.317 05:39:14 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:48:27.317 05:39:14 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:48:27.317 05:39:14 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:48:27.317 05:39:14 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:48:27.317 05:39:14 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:48:27.317 05:39:14 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:48:27.317 05:39:14 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:48:27.317 05:39:14 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:48:27.317 05:39:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:48:27.317 05:39:14 -- common/autotest_common.sh@10 -- # set +x 00:48:27.317 05:39:14 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:48:27.317 05:39:14 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:48:27.317 05:39:14 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:48:27.317 05:39:14 -- common/autotest_common.sh@10 -- # set +x 00:48:29.220 INFO: APP EXITING 00:48:29.220 INFO: killing all VMs 00:48:29.220 INFO: killing vhost app 00:48:29.220 INFO: EXIT DONE 00:48:29.478 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:48:29.478 Waiting for block devices as requested 00:48:29.478 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:48:29.737 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:48:30.304 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:48:30.563 Cleaning 00:48:30.563 Removing: /var/run/dpdk/spdk0/config 00:48:30.563 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:48:30.563 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:48:30.563 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:48:30.563 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:48:30.563 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:48:30.563 Removing: /var/run/dpdk/spdk0/hugepage_info 00:48:30.563 Removing: /dev/shm/spdk_tgt_trace.pid56800 00:48:30.563 Removing: /var/run/dpdk/spdk0 00:48:30.563 Removing: /var/run/dpdk/spdk_pid56571 00:48:30.563 Removing: /var/run/dpdk/spdk_pid56800 00:48:30.563 Removing: /var/run/dpdk/spdk_pid57035 00:48:30.563 Removing: /var/run/dpdk/spdk_pid57139 00:48:30.563 Removing: /var/run/dpdk/spdk_pid57195 00:48:30.563 Removing: /var/run/dpdk/spdk_pid57327 00:48:30.563 Removing: /var/run/dpdk/spdk_pid57352 
00:48:30.563 Removing: /var/run/dpdk/spdk_pid57562 00:48:30.563 Removing: /var/run/dpdk/spdk_pid57672 00:48:30.563 Removing: /var/run/dpdk/spdk_pid57786 00:48:30.563 Removing: /var/run/dpdk/spdk_pid57908 00:48:30.563 Removing: /var/run/dpdk/spdk_pid58016 00:48:30.563 Removing: /var/run/dpdk/spdk_pid58061 00:48:30.563 Removing: /var/run/dpdk/spdk_pid58103 00:48:30.563 Removing: /var/run/dpdk/spdk_pid58179 00:48:30.563 Removing: /var/run/dpdk/spdk_pid58290 00:48:30.563 Removing: /var/run/dpdk/spdk_pid58765 00:48:30.563 Removing: /var/run/dpdk/spdk_pid58841 00:48:30.563 Removing: /var/run/dpdk/spdk_pid58922 00:48:30.563 Removing: /var/run/dpdk/spdk_pid58938 00:48:30.563 Removing: /var/run/dpdk/spdk_pid59089 00:48:30.563 Removing: /var/run/dpdk/spdk_pid59111 00:48:30.563 Removing: /var/run/dpdk/spdk_pid59264 00:48:30.563 Removing: /var/run/dpdk/spdk_pid59286 00:48:30.563 Removing: /var/run/dpdk/spdk_pid59361 00:48:30.563 Removing: /var/run/dpdk/spdk_pid59379 00:48:30.563 Removing: /var/run/dpdk/spdk_pid59446 00:48:30.563 Removing: /var/run/dpdk/spdk_pid59471 00:48:30.563 Removing: /var/run/dpdk/spdk_pid59671 00:48:30.563 Removing: /var/run/dpdk/spdk_pid59713 00:48:30.563 Removing: /var/run/dpdk/spdk_pid59797 00:48:30.563 Removing: /var/run/dpdk/spdk_pid61188 00:48:30.563 Removing: /var/run/dpdk/spdk_pid61400 00:48:30.563 Removing: /var/run/dpdk/spdk_pid61551 00:48:30.563 Removing: /var/run/dpdk/spdk_pid62211 00:48:30.563 Removing: /var/run/dpdk/spdk_pid62428 00:48:30.563 Removing: /var/run/dpdk/spdk_pid62568 00:48:30.563 Removing: /var/run/dpdk/spdk_pid63228 00:48:30.563 Removing: /var/run/dpdk/spdk_pid63558 00:48:30.563 Removing: /var/run/dpdk/spdk_pid63709 00:48:30.563 Removing: /var/run/dpdk/spdk_pid65122 00:48:30.563 Removing: /var/run/dpdk/spdk_pid65387 00:48:30.563 Removing: /var/run/dpdk/spdk_pid65527 00:48:30.563 Removing: /var/run/dpdk/spdk_pid66945 00:48:30.563 Removing: /var/run/dpdk/spdk_pid67204 00:48:30.563 Removing: /var/run/dpdk/spdk_pid67350 
00:48:30.563 Removing: /var/run/dpdk/spdk_pid68772 00:48:30.563 Removing: /var/run/dpdk/spdk_pid69229 00:48:30.563 Removing: /var/run/dpdk/spdk_pid69369 00:48:30.563 Removing: /var/run/dpdk/spdk_pid70890 00:48:30.563 Removing: /var/run/dpdk/spdk_pid71163 00:48:30.563 Removing: /var/run/dpdk/spdk_pid71314 00:48:30.563 Removing: /var/run/dpdk/spdk_pid72828 00:48:30.563 Removing: /var/run/dpdk/spdk_pid73093 00:48:30.563 Removing: /var/run/dpdk/spdk_pid73244 00:48:30.563 Removing: /var/run/dpdk/spdk_pid74773 00:48:30.563 Removing: /var/run/dpdk/spdk_pid75276 00:48:30.563 Removing: /var/run/dpdk/spdk_pid75427 00:48:30.563 Removing: /var/run/dpdk/spdk_pid75572 00:48:30.563 Removing: /var/run/dpdk/spdk_pid76029 00:48:30.563 Removing: /var/run/dpdk/spdk_pid76803 00:48:30.563 Removing: /var/run/dpdk/spdk_pid77186 00:48:30.563 Removing: /var/run/dpdk/spdk_pid77905 00:48:30.563 Removing: /var/run/dpdk/spdk_pid78387 00:48:30.563 Removing: /var/run/dpdk/spdk_pid79184 00:48:30.821 Removing: /var/run/dpdk/spdk_pid79601 00:48:30.821 Removing: /var/run/dpdk/spdk_pid81603 00:48:30.821 Removing: /var/run/dpdk/spdk_pid82062 00:48:30.821 Removing: /var/run/dpdk/spdk_pid82513 00:48:30.821 Removing: /var/run/dpdk/spdk_pid84640 00:48:30.821 Removing: /var/run/dpdk/spdk_pid85131 00:48:30.821 Removing: /var/run/dpdk/spdk_pid85640 00:48:30.821 Removing: /var/run/dpdk/spdk_pid86720 00:48:30.821 Removing: /var/run/dpdk/spdk_pid87054 00:48:30.821 Removing: /var/run/dpdk/spdk_pid88021 00:48:30.821 Removing: /var/run/dpdk/spdk_pid88346 00:48:30.821 Removing: /var/run/dpdk/spdk_pid89312 00:48:30.821 Removing: /var/run/dpdk/spdk_pid89646 00:48:30.821 Removing: /var/run/dpdk/spdk_pid90334 00:48:30.821 Removing: /var/run/dpdk/spdk_pid90609 00:48:30.821 Removing: /var/run/dpdk/spdk_pid90682 00:48:30.821 Removing: /var/run/dpdk/spdk_pid90724 00:48:30.821 Removing: /var/run/dpdk/spdk_pid90972 00:48:30.821 Removing: /var/run/dpdk/spdk_pid91151 00:48:30.821 Removing: /var/run/dpdk/spdk_pid91250 
00:48:30.821 Removing: /var/run/dpdk/spdk_pid91354 00:48:30.821 Removing: /var/run/dpdk/spdk_pid91409 00:48:30.821 Removing: /var/run/dpdk/spdk_pid91436 00:48:30.821 Clean 00:48:30.821 05:39:17 -- common/autotest_common.sh@1453 -- # return 0 00:48:30.821 05:39:17 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:48:30.821 05:39:17 -- common/autotest_common.sh@732 -- # xtrace_disable 00:48:30.821 05:39:17 -- common/autotest_common.sh@10 -- # set +x 00:48:30.821 05:39:17 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:48:30.821 05:39:17 -- common/autotest_common.sh@732 -- # xtrace_disable 00:48:30.821 05:39:17 -- common/autotest_common.sh@10 -- # set +x 00:48:30.821 05:39:17 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:48:30.821 05:39:17 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:48:30.821 05:39:17 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:48:30.821 05:39:17 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:48:30.821 05:39:17 -- spdk/autotest.sh@398 -- # hostname 00:48:30.821 05:39:17 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:48:31.080 geninfo: WARNING: invalid characters removed from testname! 
00:49:03.157 05:39:45 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:49:03.157 05:39:49 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:49:06.443 05:39:53 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:49:09.728 05:39:56 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:49:13.012 05:39:59 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:49:16.298 05:40:02 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:49:18.833 05:40:05 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:49:18.833 05:40:05 -- spdk/autorun.sh@1 -- $ timing_finish 00:49:18.833 05:40:05 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:49:18.833 05:40:05 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:49:18.833 05:40:05 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:49:18.833 05:40:05 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:49:19.092 + [[ -n 5259 ]] 00:49:19.092 + sudo kill 5259 00:49:19.125 [Pipeline] } 00:49:19.143 [Pipeline] // timeout 00:49:19.150 [Pipeline] } 00:49:19.165 [Pipeline] // stage 00:49:19.171 [Pipeline] } 00:49:19.185 [Pipeline] // catchError 00:49:19.194 [Pipeline] stage 00:49:19.196 [Pipeline] { (Stop VM) 00:49:19.208 [Pipeline] sh 00:49:19.491 + vagrant halt 00:49:22.770 ==> default: Halting domain... 00:49:29.345 [Pipeline] sh 00:49:29.627 + vagrant destroy -f 00:49:32.916 ==> default: Removing domain... 
00:49:32.925 [Pipeline] sh 00:49:33.202 + mv output /var/jenkins/workspace/raid-vg-autotest_2/output 00:49:33.209 [Pipeline] } 00:49:33.224 [Pipeline] // stage 00:49:33.230 [Pipeline] } 00:49:33.240 [Pipeline] // dir 00:49:33.244 [Pipeline] } 00:49:33.255 [Pipeline] // wrap 00:49:33.260 [Pipeline] } 00:49:33.270 [Pipeline] // catchError 00:49:33.276 [Pipeline] stage 00:49:33.278 [Pipeline] { (Epilogue) 00:49:33.287 [Pipeline] sh 00:49:33.561 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:49:40.127 [Pipeline] catchError 00:49:40.129 [Pipeline] { 00:49:40.141 [Pipeline] sh 00:49:40.419 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:49:40.677 Artifacts sizes are good 00:49:40.685 [Pipeline] } 00:49:40.695 [Pipeline] // catchError 00:49:40.705 [Pipeline] archiveArtifacts 00:49:40.710 Archiving artifacts 00:49:40.804 [Pipeline] cleanWs 00:49:40.813 [WS-CLEANUP] Deleting project workspace... 00:49:40.813 [WS-CLEANUP] Deferred wipeout is used... 00:49:40.818 [WS-CLEANUP] done 00:49:40.819 [Pipeline] } 00:49:40.832 [Pipeline] // stage 00:49:40.836 [Pipeline] } 00:49:40.848 [Pipeline] // node 00:49:40.852 [Pipeline] End of Pipeline 00:49:40.884 Finished: SUCCESS